
Showing papers in "International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems in 2002"


Journal ArticleDOI
TL;DR: The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment; it also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field-structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
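For intuition about the definition (and not the Datafly, µ-Argus or k-Similar systems themselves), a minimal sketch of a k-anonymity check over a hypothetical set of quasi-identifier columns might look like this:

```python
from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records of the release."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical release: ZIP code and age already generalized to ranges.
release = [
    {"zip": "021**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "021**", "age": "20-29", "diagnosis": "asthma"},
    {"zip": "021**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "021**", "age": "30-39", "diagnosis": "diabetes"},
]
print(satisfies_k_anonymity(release, ["zip", "age"], k=2))  # True
```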

7,925 citations


Journal ArticleDOI
TL;DR: This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity and shows that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
Abstract: Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k-anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k-anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so they do not always yield optimal results. It is shown that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
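A minimal illustration of the two operations named above, generalization via recoding and suppression, under an assumed ZIP-code hierarchy; this sketch does not reproduce MinGen's search for a minimal generalization:

```python
# Hypothetical generalization hierarchy for ZIP codes: each level masks one more digit,
# yielding a less specific but semantically consistent value.
def generalize_zip(zip_code, level):
    """Recode a ZIP code to generalization level `level` (0 = unchanged)."""
    kept = max(len(zip_code) - level, 0)
    return zip_code[:kept] + "*" * (len(zip_code) - kept)

def suppress(value):
    """Suppression: do not release the value at all."""
    return None

print(generalize_zip("02139", 2))  # '021**'
print(suppress("02139"))           # None
```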

1,765 citations


Journal ArticleDOI
TL;DR: This work presents a fuzzy TOPSIS model under group decisions for solving the facility location selection problem, where the ratings of various alternative locations under different subjective attributes and the importance weights of all attributes are assessed in linguistic values represented by fuzzy numbers.
Abstract: This work presents a fuzzy TOPSIS model under group decisions for solving the facility location selection problem, where the ratings of various alternative locations under different subjective attributes and the importance weights of all attributes are assessed in linguistic values represented by fuzzy numbers. The objective attributes are transformed into dimensionless indices to ensure compatibility with the linguistic ratings of the subjective attributes. Furthermore, the membership function of the aggregation of the ratings and weights for each alternative location versus each attribute can be developed by interval arithmetic and α-cuts of fuzzy numbers. The ranking method of the mean of the integral values is applied to help derive the ideal and negative-ideal fuzzy solutions to complete the proposed fuzzy TOPSIS model. Finally, a numerical example demonstrates the computational process of the proposed model.
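As a rough illustration of the closeness-coefficient step, the following sketch uses triangular fuzzy numbers and the common vertex-distance formulation rather than the paper's α-cut, interval-arithmetic and mean-of-integral-values construction; the attribute ratings are hypothetical:

```python
import math

# Triangular fuzzy numbers are triples (l, m, u).
def fuzzy_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

def closeness(ratings, ideal, negative_ideal):
    """Closeness coefficient of one alternative given its weighted fuzzy ratings
    per attribute and the fuzzy ideal / negative-ideal solutions."""
    d_pos = sum(fuzzy_distance(r, i) for r, i in zip(ratings, ideal))
    d_neg = sum(fuzzy_distance(r, n) for r, n in zip(ratings, negative_ideal))
    return d_neg / (d_pos + d_neg)

# Hypothetical alternative rated on two attributes.
alt = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]
ideal = [(1.0, 1.0, 1.0)] * 2
neg_ideal = [(0.0, 0.0, 0.0)] * 2
print(round(closeness(alt, ideal, neg_ideal), 3))
```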

290 citations


Journal ArticleDOI
TL;DR: A heuristic algorithm based on rough entropy is proposed for knowledge reduction in incomplete information systems; its time complexity is O(|A|²|U|).
Abstract: Rough set theory is emerging as a powerful tool for reasoning about data, and knowledge reduction is one of the important topics in research on rough set theory. It has been proven that finding the minimal reduct of an information system is an NP-hard problem, as is finding the minimal reduct of an incomplete information system; the main cause of NP-hardness is the combinatorial explosion of attribute subsets. In this paper, knowledge reduction is defined from the view of information, and a heuristic algorithm based on rough entropy is proposed for knowledge reduction in incomplete information systems; the time complexity of this algorithm is O(|A|²|U|). An illustrative example is provided that shows the application potential of the algorithm.
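A sketch of the greedy, entropy-driven selection idea, simplified to complete information (ordinary partition entropy instead of rough entropy over tolerance classes); the table and attribute names are hypothetical:

```python
import math
from collections import Counter

def partition_entropy(records, attrs):
    """Shannon entropy of the partition of the universe U induced by an
    attribute subset (a complete-information simplification of rough entropy)."""
    n = len(records)
    counts = Counter(tuple(r[a] for a in attrs) for r in records)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def greedy_reduct(records, attributes):
    """Greedy heuristic: keep adding the attribute that raises the partition
    entropy most until the entropy of the full attribute set is reached.
    Each pass costs O(|A|*|U|), so the loop is O(|A|^2 |U|) in the same
    spirit as the paper's bound."""
    target = partition_entropy(records, attributes)
    reduct = []
    while not reduct or partition_entropy(records, reduct) < target:
        best = max((a for a in attributes if a not in reduct),
                   key=lambda a: partition_entropy(records, reduct + [a]))
        reduct.append(best)
    return reduct

# Hypothetical information system with three attributes.
table = [
    {"a1": 1, "a2": 0, "a3": 0},
    {"a1": 1, "a2": 1, "a3": 0},
    {"a1": 0, "a2": 1, "a3": 1},
    {"a1": 0, "a2": 0, "a3": 1},
]
print(greedy_reduct(table, ["a1", "a2", "a3"]))  # ['a1', 'a2']
```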

161 citations


Journal ArticleDOI
TL;DR: It will be shown that preservation of T-transitivity is closely related to the domination of the applied aggregation operator over the corresponding t-norm T, and basic properties for dominating aggregation operators, not only in the case of dominating some t-norm T but also of dominating some arbitrary aggregation operator, will be presented.
Abstract: Aggregation processes are fundamental in any discipline where the fusion of information is of vital interest. For aggregating binary fuzzy relations such as equivalence relations or fuzzy orderings, the question arises which aggregation operators preserve specific properties of the underlying relations, e.g. T-transitivity. It will be shown that preservation of T-transitivity is closely related to the domination of the applied aggregation operator over the corresponding t-norm T. Furthermore, basic properties for dominating aggregation operators, not only in the case of dominating some t-norm T, but dominating some arbitrary aggregation operator, will be presented. Domination of isomorphic t-norms and ordinal sums of t-norms will be treated. Special attention is paid to the four basic t-norms (minimum t-norm, product t-norm, Lukasiewicz t-norm, and the drastic product).
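For reference, the domination property referred to above is usually stated, in the binary case, as follows (this is the formulation commonly used in the aggregation-operator literature):

```latex
% An aggregation operator A dominates a t-norm T (written A >> T) if
\[
  A\bigl(T(x_1, y_1),\, T(x_2, y_2)\bigr) \;\geq\; T\bigl(A(x_1, x_2),\, A(y_1, y_2)\bigr)
  \qquad \text{for all } x_1, x_2, y_1, y_2 \in [0,1].
\]
```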

121 citations


Journal ArticleDOI
TL;DR: A generalization of the concept of symmetric fuzzy measure, based on a decomposition of the universal set into what are called subsets of indifference; properties of these measures and their Choquet integral are studied.
Abstract: In this paper we propose a generalization of the concept of symmetric fuzzy measure based on a decomposition of the universal set into what we have called subsets of indifference. Some properties of these measures are studied, as well as their Choquet integral. Finally, a degree of interaction between the subsets of indifference is defined.

93 citations


Journal ArticleDOI
TL;DR: The sum normal constraint is applied in this paper to both gradient descent optimization and Kalman filter optimization of fuzzy membership functions.
Abstract: Given a fuzzy logic system, how can we determine the membership functions that will result in the best performance? If we constrain the membership functions to a certain shape (e.g., triangles or trapezoids) then each membership function can be parameterized by a small number of variables and the membership optimization problem can be reduced to a parameter optimization problem. This is the approach that is typically taken, but it results in membership functions that are not (in general) sum normal. That is, the resulting membership function values do not add up to one at each point in the domain. This optimization approach is modified in this paper so that the resulting membership functions are sum normal. Sum normality is desirable not only for its intuitive appeal but also for computational reasons in the real-time implementation of fuzzy logic systems. The sum normal constraint is applied in this paper to both gradient descent optimization and Kalman filter optimization of fuzzy membership functions. The methods are illustrated on a fuzzy automotive cruise controller.
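A quick numerical check of what sum normality means (the paper's contribution is enforcing this constraint inside the gradient-descent and Kalman-filter optimization itself, which is not shown here); the fuzzy partition below is hypothetical:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical fuzzy partition of the domain [0, 10] with three overlapping triangles.
x = np.linspace(0.0, 10.0, 101)
params = [(-5.0, 0.0, 5.0), (0.0, 5.0, 10.0), (5.0, 10.0, 15.0)]
mu = np.array([triangular(x, *p) for p in params])

# Sum normality: the membership values add up to one at every point of the domain.
print(np.allclose(mu.sum(axis=0), 1.0))  # True for this partition
```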

81 citations


Journal ArticleDOI
TL;DR: A hybrid model is developed which provides a unified framework for fuzzy default reasoning and can be used as a basis for information sharing and exchange in knowledge-based multi-agent systems for practical applications such as automated group negotiations.
Abstract: This paper develops a hybrid model which provides a unified framework for the following four kinds of reasoning: 1) Zadeh's fuzzy approximate reasoning; 2) truth-qualification uncertain reasoning with respect to fuzzy propositions; 3) fuzzy default reasoning (proposed, in this paper, as an extension of Reiter's default reasoning); and 4) truth-qualification uncertain default reasoning associated with fuzzy statements (developed in this paper to enrich fuzzy default reasoning with uncertain information). Our hybrid model has the following characteristics: 1) basic uncertainty is estimated in terms of words or phrases in natural language and basic propositions are fuzzy; 2) uncertainty, linguistically expressed, can be handled in default reasoning; and 3) the four kinds of reasoning models mentioned above, as well as their combinations, are special cases of our hybrid model. Moreover, our model allows reasoning to be performed in cases in which the information is fuzzy, uncertain and partial. More importantly, the problem of sharing information among heterogeneous fuzzy, uncertain and default reasoning models can be solved efficiently by using our model. Given this, our framework can be used as a basis for information sharing and exchange in knowledge-based multi-agent systems for practical applications such as automated group negotiations. Indeed, building such a foundation is the motivation of this paper.

55 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to provide an alternative method for improving consistency and show how it can be applied to pairwise comparison matrices.
Abstract: The Analytic Hierarchy Process provides the decision maker with a method for improving the consistency of pairwise comparison matrices. Although it is one of the most commonly used methods, it presents some disadvantages, generally related to the consistency problem. The purpose of this paper is to provide an alternative method for improving consistency and to show how it can be applied to pairwise comparison matrices. The contributions of this method, as well as its limitations, are shown at the end.

50 citations


Journal ArticleDOI
TL;DR: The development of a new and general theory of algebraic concepts based on *-fuzzy equalities and strong fuzzy functions is begun, and several fundamental results are established under the name M-vague algebra.
Abstract: In the present work, the development of a new and general theory of algebraic concepts based on *-fuzzy equalities and strong fuzzy functions is begun, and several fundamental results are established under the name M-vague algebra. As a natural implementation of the M-vague algebraic approach, new kinds of arithmetic operations, namely M-vague arithmetic operations, are introduced using an approach different from the traditional approach to fuzzy arithmetic operations, and various kinds of M-vague algebraic properties of M-vague arithmetic operations are investigated.

45 citations


Journal ArticleDOI
TL;DR: This paper first studies qualitative independence relations when uncertainty is encoded by a complete pre-order between states of the world, and then investigates the impact of embedding qualitative independence relations into scale-based possibility theory.
Abstract: The notion of independence is central in many information processing areas, such as multiple criteria decision making, databases, or uncertain reasoning. This is especially true in the latter case, where the success of Bayesian networks is basically due to the graphical representation of independence they provide. This paper first studies qualitative independence relations when uncertainty is encoded by a complete pre-order between states of the world. While a lot of work has focused on the formulation of suitable definitions of independence in uncertainty theories, our interest in this paper is rather to formulate a general definition of independence based on purely ordinal considerations, one that applies to all weakly ordered settings. The second part of the paper investigates the impact of embedding qualitative independence relations into scale-based possibility theory. The absolute scale used in this setting enforces commensurateness between local pre-orders (since they share the same scale). This leads to an easy decomposability property of the joint distributions into more elementary relations on the basis of the independence relations. Lastly, we provide a comparative study between already known definitions of possibilistic independence and the ones proposed here.

Journal ArticleDOI
Gleb Beliakov
TL;DR: The basis for splines is selected in such a way that these restrictions take an especially simple form, and the resulting non-negative least squares problem can be solved by a variety of standard proven techniques.
Abstract: The need for monotone approximation of scattered data often arises in many problems of regression, when the monotonicity is semantically important. One such domain is fuzzy set theory, where membership functions and aggregation operators are order preserving. Least squares polynomial splines provide great flexibility when modeling non-linear functions, but may fail to be monotone. Linear restrictions on spline coefficients provide necessary and sufficient conditions for spline monotonicity. The basis for splines is selected in such a way that these restrictions take an especially simple form. The resulting non-negative least squares problem can be solved by a variety of standard proven techniques. Additional interpolation requirements can also be imposed in the same framework. The method is applied to fuzzy systems, where membership functions and aggregation operators are constructed from empirical data.
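A minimal sketch of the underlying idea, monotonicity via non-negativity of coefficients in a suitably chosen basis, using a simple truncated-linear (ramp) basis and SciPy's standard non-negative least squares solver instead of the paper's spline basis; the data are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical scattered data that should be fitted by a non-decreasing function.
x = np.array([0.0, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])
y = np.array([0.0, 0.15, 0.2, 0.45, 0.5, 0.75, 0.8, 1.0])

# Basis of non-decreasing ramp functions max(0, x - t): any non-negative
# combination of them (plus the constant handled by shifting y) is non-decreasing.
knots = np.linspace(0.0, 0.9, 10)
A = np.maximum(x[:, None] - knots[None, :], 0.0)

coef, residual = nnls(A, y - y.min())   # non-negativity of coef <=> monotonicity here
fit = A @ coef + y.min()
print(np.all(np.diff(fit) >= -1e-12))   # fitted values are non-decreasing
```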

Journal ArticleDOI
TL;DR: This paper investigates connections between families of triangular norms, triangular conorms and uninorms, and finds that certain structures of uninorms admit only the idempotent case.
Abstract: This paper deals with binary operations on the unit interval. We investigate connections between families of triangular norms, triangular conorms and uninorms. Certain structures of uninorms admit only the idempotent case.

Journal ArticleDOI
TL;DR: This work deals with the problem of the reliability of quantitative rankings and uses quasi-linear means for providing a more general approach to get priority and antipriority vectors.
Abstract: It is known that in the Analytic Hierarchy Process (A.H.P.) a scale of relative importance for alternatives is derived from a pairwise comparisons matrix A = (aij). Priority vectors are basically provided by the following methods: the right eigenvector method, the geometric mean method and the arithmetic mean method. Antipriority vectors can also be considered; they are built by both the left eigenvector method and mean procedures applied to the columns of A. When the matrix A is inconsistent, priority and antipriority vectors do not indicate necessarily the same ranking. We deal with the problem of the reliability of quantitative rankings and we use quasi-linear means for providing a more general approach to get priority and antipriority vectors.

Journal ArticleDOI
TL;DR: Two classes of software systems that release tabular summaries of an underlying database are described: table servers, which respond to user queries for (marginal) sub-tables of the "full" table summarizing the entire database and dynamically assess disclosure risk in light of previously answered queries, and optimal tabular releases, which are static releases of sets of sub-tables that maximize data utility subject to a constraint on disclosure risk.
Abstract: We describe two classes of software systems that release tabular summaries of an underlying database. Table servers respond to user queries for (marginal) sub-tables of the "full" table summarizing the entire database, and are characterized by dynamic assessment of disclosure risk, in light of previously answered queries. Optimal tabular releases are static releases of sets of sub-tables that are characterized by maximizing the amount of information released, as given by a measure of data utility, subject to a constraint on disclosure risk. We discuss the underlying abstractions (primarily associated with the query space, as well as released and unreleasable sub-tables and frontiers), computational algorithms and issues (especially scalability), and prototype software implementations.

Journal ArticleDOI
TL;DR: The interior-outer-set model for calculating a fuzzy risk represented by a possibility-probability distribution is introduced and transformed into a matrix algorithm that is easy to implement as a computer program.
Abstract: In this paper, we introduce the interior-outer-set model for calculating a fuzzy risk represented by a possibility-probability distribution. The model involves combinatorial calculations and is very difficult to follow, so we transform it into a matrix algorithm. Although the algorithm is still difficult to follow by hand, it is easy to implement as a computer program. The algorithm consists of a MOVING subalgorithm and an INDEX subalgorithm: the former works out the leaving and joining matrices, and the latter is a combinatorial algorithm that produces the index sets. An example is presented showing how a user can calculate the risk of a strong earthquake with the algorithm.

Journal ArticleDOI
TL;DR: Microaggregation is a technique for the protection of the confidentiality of respondents in microdata releases that releases the averages of small groups in which no single respondent is dominant.
Abstract: Microaggregation is a technique for the protection of the confidentiality of respondents in microdata releases. It is used for economic data where respondent identifiability is high. Microaggregation releases the averages of small groups in which no single respondent is dominant. It was developed for univariate data: the data were sorted and the averages of adjacent fixed-size groups were reported. The groups can be allowed to have varying sizes so that no group includes a large gap in the sorted data; the groups become more homogeneous when their boundaries are sensitive to the distribution of the data. This is like clustering, but with the number of clusters chosen to be as large as possible subject to homogeneous clusters and a minimum cluster size. Approximate methods based on comparisons are developed. Exact methods based on linear optimization are also developed. For bivariate, or higher-dimensional, data the notion of adjacency is defined even though sorting is no longer well defined. The constraints for minimum cluster size are also more elaborate and not so easily solved. We may also use only a triangulation to limit the number of adjacencies to be considered in the algorithms. Hybrids of the approximate and exact methods combine the strengths of each strategy.
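A minimal sketch of fixed-size univariate microaggregation as described above (sort, group k adjacent records, release group means); the values and the tail-group handling are illustrative, not the paper's variable-size or exact optimization methods:

```python
def microaggregate(values, k):
    """Univariate fixed-size microaggregation: sort the values, form groups of k
    adjacent records (a too-small tail is merged into the previous group), and
    release each value replaced by its group average."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    released = [0.0] * len(values)
    for start in range(0, len(values), k):
        group = order[start:start + k]
        if len(group) < k and start > 0:
            group = order[start - k:]          # merge the remainder into the last group
        mean = sum(values[i] for i in group) / len(group)
        for i in group:
            released[i] = mean
    return released

print(microaggregate([3.0, 8.0, 1.0, 9.0, 4.0, 20.0, 5.0], k=3))
```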

Journal ArticleDOI
TL;DR: A new algorithm is introduced that locates 'risky' records in discrete data by first identifying all unique attribute sets (up to a user-specified maximum size) and secondly by grading the 'risk' of each record according to the number and distribution of unique attribute sets within each record.
Abstract: Many organizations require detailed individual-level information, much of which has been collected under guarantees of confidentiality. However, simple anonymization procedures, i.e. removing names and addresses, are insufficient for this to be ensured. The records belonging to certain individuals have a high probability of being identified (as their contents, or attributes, are unusual) and therefore have the potential to be recognized spontaneously - such records are referred to as special uniques. Consider, for example, a sixteen-year-old widow in a population survey. Confidentiality of a given dataset cannot be ensured until all special unique records are identified and either disguised or removed. However, to the knowledge of the authors, no exhaustive automated analysis of this nature has been conducted, due to the demanding levels of computation and data storage that are required. This paper introduces a new algorithm that locates 'risky' records in discrete data by first identifying all unique attribute sets (up to a user-specified maximum size) and secondly by grading the 'risk' of each record according to the number and distribution of unique attribute sets within each record. Empirical tests indicate that the algorithm is highly effective at picking out 'risky' records from large samples of data.
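A brute-force sketch of the two steps described above, finding unique attribute sets up to a maximum size and grading records by how many they contain; the sample records are hypothetical and the exhaustive enumeration is far less efficient than the paper's algorithm:

```python
from collections import Counter
from itertools import combinations

def unique_attribute_sets(records, max_size):
    """Count, per record, the attribute-value combinations (up to max_size
    attributes) that occur in exactly one record; use the count as a crude risk grade."""
    attrs = list(records[0].keys())
    risk = Counter()
    for size in range(1, max_size + 1):
        for subset in combinations(attrs, size):
            counts = Counter(tuple(r[a] for a in subset) for r in records)
            for idx, r in enumerate(records):
                if counts[tuple(r[a] for a in subset)] == 1:
                    risk[idx] += 1
    return risk

# Hypothetical survey extract: the sixteen-year-old widow (record 0) stands out.
sample = [
    {"age": "16-20", "marital": "widowed", "sex": "F"},
    {"age": "16-20", "marital": "single",  "sex": "F"},
    {"age": "16-20", "marital": "single",  "sex": "F"},
    {"age": "41-50", "marital": "married", "sex": "M"},
    {"age": "41-50", "marital": "married", "sex": "M"},
]
print(unique_attribute_sets(sample, max_size=2).most_common(1))  # [(0, 3)]
```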

Journal ArticleDOI
TL;DR: A simple model that captures user uncertainty can be used to define suitable measures of disclosure risk and data utility to implement existing optimality criteria for the choice of the best form of data release.
Abstract: In this paper we show how a simple model that captures user uncertainty can be used to define suitable measures of disclosure risk and data utility. The model generalizes previous results of Duncan and Lambert. We present several examples to illustrate how the new measures can be used to implement existing optimality criteria for the choice of the best form of data release.

Journal ArticleDOI
TL;DR: A cluster analysis module available in a standard statistical software package is used to re-identify persons in German employment statistics and the German Life History Study, showing that the number of identifiable persons is remarkably high.
Abstract: More and more empirical researchers from universities or research centres like to use register or survey data collected by statistical agencies or the social security system, since these data can be used for several empirical studies, e.g. the analysis of special groups or quantitative effects of economic or social policies. Most of the data required have to be (factually) anonymised before they are disseminated, to preserve confidentiality. In the area of statistics on households and individuals this path has been pursued in Germany for several years. The transmission of de facto anonymised datafiles has proved to be a good form of co-operation between scientists and statisticians. Factual anonymity of the data depends on the costs and benefits of a potential re-identification. The paper assumes that the intruder only accepts low costs; therefore he uses a cluster analysis module that is available in a standard statistical software package to re-identify persons. After a description of the method, different factors influencing the re-identification risk are studied using German employment statistics (register data) and the German Life History Study (survey data). The factors are the sampling fraction and the number of (irrelevant) variables. The results show that the number of identifiable persons is remarkably high. Furthermore, the cluster analysis confirms that the number of re-identifiable records increases with increasing sampling fraction and that irrelevant variables reduce this number.

Journal ArticleDOI
TL;DR: It is shown in this paper how to find interval estimates for the original data based on the microaggregated data; such intervals can be considerably narrower than intervals resulting from subtraction of means, and can be useful to detect lack of security in a microaggregated data set.
Abstract: Microaggregation is a statistical disclosure control technique. Raw microdata (i.e. individual records) are grouped into small aggregates prior to publication. With fixed-size groups, each aggregate contains k records to prevent disclosure of individual information. Individual ranking is a usual criterion to reduce multivariate microaggregation to the univariate case: the idea is to perform microaggregation independently for each variable in the record. Using distributional assumptions, we show in this paper how to find interval estimates for the original data based on the microaggregated data. Such intervals can be considerably narrower than intervals resulting from subtraction of means, and can be useful to detect lack of security in a microaggregated data set. Analytical arguments given in this paper confirm recent empirical results about the unsafety of individual ranking microaggregation.

Journal ArticleDOI
TL;DR: A computational method for solving fuzzy constrained matrix games, based on establishing an auxiliary fuzzy linear programming problem for each player, is proposed, and an approach based on multiobjective programming is used to solve these fuzzy linear programming problems.
Abstract: The purpose of the paper is to introduce a new type of fuzzy matrix games: fuzzy constrained matrix games. A computational method for its solution, based on establishing an auxiliary fuzzy linear programming problem for each player, is proposed. An approach based on multiobjective programming is established to solve these fuzzy linear programming problems. Effectiveness is illustrated with a numerical example.

Journal ArticleDOI
TL;DR: This paper goes deeply into this matter and proposes several possible definitions for the concept of consonant random set and concludes that only one of them seems to be necessary.
Abstract: Different authors have observed some relationships between consonant random sets and possibility measures, especially for finite universes. In this paper, we go deeply into this matter and propose several possible definitions for the concept of consonant random set. Three of these conditions are equivalent for finite universes; in that case, the random set considered is associated with a possibility measure if and only if any of them is satisfied. However, in a general context, none of the six definitions proposed here is sufficient for a random set to induce a possibility measure. Moreover, only one of them seems to be necessary.

Journal ArticleDOI
TL;DR: The definition of general entropy is extended to the countable case for which a sufficient condition of convergence is proved and it is proved that in that case the general entropy possesses the "subset independence" property.
Abstract: The concept of entropy is an important part of the theory of additive measures. In this paper, a definition of entropy is introduced for general (not necessarily additive) measures as the infimum of the Shannon entropies of "subordinate" additive measures. Several properties of the general entropy are discussed and proved. Some of the properties require that the measure belongs to the class of so-called "equientropic" general measures introduced and studied in this paper. The definition of general entropy is extended to the countable case, for which a sufficient condition of convergence is proved. We introduce a method of "conditional combination" of general measures and prove that in that case the general entropy possesses the "subset independence" property.

Journal ArticleDOI
TL;DR: HFPNN is a flexible neural architecture whose structure is based on the Group Method of Data Handling (GMDH) and developed through learning and the number of layers of the PNN is not fixed in advance but is generated in a dynamic way.
Abstract: We propose a hybrid architecture based on a combination of fuzzy systems and polynomial neural networks. The resulting Hybrid Fuzzy Polynomial Neural Networks (HFPNN) dwell on the ideas of fuzzy rule-based computing and polynomial neural networks. The structure of the network comprises fuzzy polynomial neurons (FPNs) forming the nodes of the first (input) layer of the HFPNN and polynomial neurons (PNs) located in the consecutive layers of the network. In the FPN (which forms a fuzzy inference system), the generic rules assume the form "if A then y = P(x)", where A is a fuzzy relation in the condition space and P(x) is a polynomial standing in the conclusion part of the rule. The conclusion part of the rules, in particular the regression polynomial, uses several types of polynomials such as constant, linear, quadratic, and modified quadratic. For the premise part of the rules, both triangular and Gaussian-like membership functions are considered. Each PN of the network realizes a polynomial type of partial description (PD) of the mapping between input and output variables. HFPNN is a flexible neural architecture whose structure is based on the Group Method of Data Handling (GMDH) and is developed through learning. In particular, the number of layers of the PNN is not fixed in advance but is generated in a dynamic way. The experimental part of the study involves two representative numerical examples: a chaotic time series and the Box-Jenkins gas furnace data.

Journal ArticleDOI
TL;DR: Bounds are shown for two classes of fuzzy implications connected with the investigation of contrapositive implications, i.e. functions that satisfy the functional equation I(x, y) = I(N(y), N(x)) with a strong negation N : [0,1] → [0,1].
Abstract: Recently, we have examined the solutions of the system of functional equations I(x, T(y,z)) = T(I(x,y), I(x,z)), I(x, I(y,z)) = I(T(x,y),z), where T : [0,1]² → [0,1] is a strict t-norm and I : [0,1]² → [0,1] is a non-continuous fuzzy implication. In this paper we continue these investigations for contrapositive implications, i.e. functions which satisfy the functional equation I(x, y) = I(N(y), N(x)), with a strong negation N : [0,1] → [0,1]. We also show bounds for two classes of fuzzy implications which are connected with our investigations.

Journal ArticleDOI
TL;DR: General counterexamples are constructed which show that none of the popular sensitivity rules adequately reflects disclosure risk if cell contributors or coalitions of them behave as intruders, and an alternative sensitivity rule based on the concentration of relative contributions is proposed.
Abstract: In statistical disclosure control of tabular data, sensitivity rules are commonly used to decide whether a table cell is sensitive and should therefore not be published. The most popular sensitivity rules are the dominance rule, the p%-rule and the pq-rule. The dominance rule has received criticism based on specific numerical examples and is being gradually abandoned by leading statistical agencies. In this paper, we construct general counterexamples which show that none of the above rules adequately reflects disclosure risk if cell contributors or coalitions of them behave as intruders: in that case, releasing a cell declared non-sensitive can imply higher disclosure risk than releasing a cell declared sensitive. As a possible solution, we propose an alternative sensitivity rule based on the concentration of relative contributions. More generally, we suggest complementing a priori risk assessment based on sensitivity rules with a posteriori risk assessment which takes into account tables after they have been protected.
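For concreteness, the dominance rule and the p%-rule as they are commonly formulated in the disclosure-control literature (thresholds and cell contributions are illustrative; the paper's alternative concentration-based rule is not shown):

```python
def dominance_rule(contributions, n=1, k=0.75):
    """(n, k)-dominance rule: a cell is sensitive if its n largest contributors
    account for more than a fraction k of the cell total."""
    c = sorted(contributions, reverse=True)
    return sum(c[:n]) > k * sum(c)

def p_percent_rule(contributions, p=0.10):
    """p%-rule: a cell is sensitive if the remainder after removing the two
    largest contributions is less than p times the largest contribution,
    i.e. the second-largest contributor could estimate the largest to within p%."""
    c = sorted(contributions, reverse=True)
    return sum(c) - c[0] - c[1] < p * c[0]

cell = [900.0, 80.0, 10.0, 10.0]                    # hypothetical cell contributions
print(dominance_rule(cell), p_percent_rule(cell))   # True True
```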

Journal ArticleDOI
TL;DR: A general fixed point theorem for a multi-valued probabilistic q-contraction f : S → 2^S is proved, where (S, F, T) is a complete Menger space and F satisfies a growth condition connected with the countable extension of a t-norm T.
Abstract: A general fixed point theorem for a multi-valued probabilistic q-contraction f : S → 2^S is proved, where (S, F, T) is a complete Menger space and F satisfies a growth condition which is connected with the countable extension of a t-norm T. As a corollary, a generalization of Tardiff's result [27] is obtained. A random fixed point result is proved, where a measure space related to a decomposable measure is used.

Journal ArticleDOI
TL;DR: It is shown that every state ω on a lattice effect algebra E induces a uniform topology on E, and if ω is subadditive this topology coincides with pseudometric topology induced by ω.
Abstract: We show that every state ω on a lattice effect algebra E induces a uniform topology on E. If ω is subadditive, this topology coincides with the pseudometric topology induced by ω. Further, we show relations between the interval and order topologies on E and the topologies induced by states.

Journal ArticleDOI
TL;DR: A method is proposed to calculate the partial correlation of intuitionistic fuzzy sets by means of a multivariate correlation model, using the empirical logit transform of the degrees of membership of intuitionistic fuzzy sets to fit them into a normal framework.
Abstract: In many applications, the partial correlation for three or more intuitionistic fuzzy sets is very important, but Hung [12] does not discuss this problem. In this paper, we propose a method to calculate the partial correlation of intuitionistic fuzzy sets by means of a multivariate correlation model. In order to fit into a normal framework, we use the empirical logit transform (see Agresti [1] and Johnson and Wichern [13]) for the degrees of membership of intuitionistic fuzzy sets to achieve this.
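A sketch of the overall recipe, an empirical logit transform of the membership degrees followed by partial correlations from the inverse correlation matrix; the smoothing constant and the membership data are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def empirical_logit(p, n=50):
    """Empirical logit transform log((p + 1/(2n)) / (1 - p + 1/(2n))),
    applied to membership degrees so that values near 0 and 1 stay finite."""
    return np.log((p + 0.5 / n) / (1.0 - p + 0.5 / n))

def partial_correlation(data):
    """Partial correlation of each pair of variables given all the others,
    obtained from the inverse of the correlation matrix."""
    prec = np.linalg.inv(np.corrcoef(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Hypothetical membership degrees of 6 elements in 3 intuitionistic fuzzy sets.
mu = np.array([
    [0.2, 0.3, 0.1],
    [0.4, 0.5, 0.3],
    [0.6, 0.55, 0.5],
    [0.7, 0.8, 0.6],
    [0.9, 0.85, 0.8],
    [0.1, 0.2, 0.15],
])
print(np.round(partial_correlation(empirical_logit(mu)), 2))
```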