
Showing papers on "Rough set published in 2005"


Journal ArticleDOI
TL;DR: This paper proposes a reasonable definition of parameterization reduction of soft sets, compares it with the concept of attribute reduction in rough set theory, and improves the application of a soft set to a decision-making problem found in [1].
Abstract: In this paper, we focus our discussion on the parameterization reduction of soft sets and its applications. First, we point out that the results of soft set reductions offered in [1] are incorrect. We also observe that the algorithms in [1], which first compute the reduct-soft-set and then compute the choice value to select the optimal objects for decision problems, are not reasonable, and we illustrate this with an example. Finally, we propose a reasonable definition of parameterization reduction of soft sets and compare it with the concept of attribute reduction in rough set theory. Using this new definition of parameterization reduction, we improve the application of a soft set to a decision-making problem found in [1].

632 citations
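The choice-value selection that this paper critiques can be sketched in a few lines. This is a sketch of the procedure used in [1], not the corrected definition the paper proposes; the object and parameter names are hypothetical:

```python
# Boolean soft set: each object is described by which parameters it satisfies.
# Names are illustrative only.
soft_set = {
    "h1": {"expensive": 0, "wooden": 1, "cheap": 1, "modern": 0},
    "h2": {"expensive": 1, "wooden": 0, "cheap": 0, "modern": 1},
    "h3": {"expensive": 0, "wooden": 1, "cheap": 1, "modern": 1},
}

def choice_values(s):
    """Choice value of an object = number of parameters it satisfies."""
    return {obj: sum(params.values()) for obj, params in s.items()}

cv = choice_values(soft_set)
best = max(cv, key=cv.get)  # object with the highest choice value
```

The paper's point is that running this selection on a reduct-soft-set computed as in [1] can change which object wins, which is why a different notion of parameterization reduction is needed.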


BookDOI
TL;DR: Proceedings of RSFDGrC 2013, the 14th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, held in Halifax, NS, Canada, October 11-14, 2013.
Abstract: 14th International Conference, RSFDGrC 2013, Halifax, NS, Canada, October 11-14, 2013. Proceedings - Part of the Lecture Notes in Computer Science book series

535 citations


Journal ArticleDOI
TL;DR: In this paper, some definitions of upper and lower approximation operators of fuzzy sets by means of arbitrary fuzzy relations are proposed and a special lower approximation operator is applied to a fuzzy reasoning system, which coincides with the Mamdani algorithm.
Abstract: Rough sets and fuzzy sets have proved to be powerful mathematical tools for dealing with uncertainty, which naturally raises the question of whether the two can be connected. Existing generalizations of fuzzy rough sets are all based on special fuzzy relations (fuzzy similarity relations, T-similarity relations); it is therefore advantageous to generalize fuzzy rough sets by means of arbitrary fuzzy relations and to present a general framework for their study using both constructive and axiomatic approaches. In this paper, from the viewpoint of the constructive approach, we first propose definitions of upper and lower approximation operators of fuzzy sets by means of arbitrary fuzzy relations and study the relations among them; the connections between special fuzzy relations and the upper and lower approximation operators are also examined. In the axiomatic approach, we characterize different classes of generalized upper and lower approximation operators of fuzzy sets by different sets of axioms. The lattice and topological structures of fuzzy rough sets are also investigated. To demonstrate that the proposed generalization of fuzzy rough sets has a wider range of applications than existing fuzzy rough sets, a special lower approximation operator is applied to a fuzzy reasoning system, where it coincides with the Mamdani algorithm.

420 citations
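As an illustration of such operators, here is one common constructive instantiation over a finite universe, using the Kleene-Dienes implicator for the lower approximation and the minimum t-norm for the upper one; the relation `R` is an arbitrary fuzzy relation, not assumed to be a similarity relation. This is a generic sketch, not the paper's exact operators:

```python
def fuzzy_lower(R, A, universe):
    """Lower approximation: inf over y of I(R(x,y), A(y)),
    with the Kleene-Dienes implicator I(a, b) = max(1 - a, b)."""
    return {x: min(max(1 - R[x][y], A[y]) for y in universe) for x in universe}

def fuzzy_upper(R, A, universe):
    """Upper approximation: sup over y of T(R(x,y), A(y)), with T = min."""
    return {x: max(min(R[x][y], A[y]) for y in universe) for x in universe}

U = ["a", "b"]
R = {"a": {"a": 1.0, "b": 0.4}, "b": {"a": 0.4, "b": 1.0}}  # arbitrary fuzzy relation
A = {"a": 0.9, "b": 0.2}                                    # fuzzy set to approximate
lo, up = fuzzy_lower(R, A, U), fuzzy_upper(R, A, U)
```

With other implicators and t-norms, the same two comprehensions yield the other operator classes whose axiomatic characterizations the paper studies.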


Journal ArticleDOI
TL;DR: A non-parametric modification of the VPRS model called the Bayesian Rough Set (BRS) model is presented, where the set approximations are defined by using the prior probability as a reference.

382 citations


Proceedings ArticleDOI
25 Jul 2005
TL;DR: It is shown that soft sets are a class of special information systems and that partition-type soft sets and information systems have the same formal structures, and that fuzzysoft sets and fuzzy information systems are equivalent.
Abstract: This paper discusses the relationship between soft sets and information systems. It is shown that soft sets are a class of special information systems. After soft sets are extended to several classes of general cases, the more general results also show that partition-type soft sets and information systems have the same formal structures, and that fuzzy soft sets and fuzzy information systems are equivalent.

336 citations


Journal ArticleDOI
TL;DR: In this paper, genetic programming (GP) is used to build credit scoring models and it is concluded that GP can provide better performance than other models.
Abstract: Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches, such as artificial neural networks (ANNs), rough sets, and decision trees, have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of even a fraction of a percent may translate into significant savings, a more sophisticated model is needed to significantly improve the accuracy of the credit scoring model. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples are employed to compare the error rate with that of other credit scoring models, including ANNs, decision trees, rough sets, and logistic regression. On the basis of the results, we conclude that GP can provide better performance than the other models.

332 citations


Journal ArticleDOI
TL;DR: A general framework for the study of (I,T)-fuzzy rough approximation operators within which both constructive and axiomatic approaches are used, and an operator-oriented characterization of rough sets is proposed.

229 citations


Book ChapterDOI
TL;DR: This article gives an overview of the Rough Set Exploration System (RSES), a freely available software toolset for data exploration, classification support, and knowledge discovery.
Abstract: This article gives an overview of the Rough Set Exploration System (RSES). RSES is a freely available software toolset for data exploration, classification support, and knowledge discovery. The main functionalities of the software are presented along with a brief explanation of the algorithmic methods used by RSES. Many of the RSES methods originate from rough set theory, introduced by Zdzislaw Pawlak during the early 1980s.

210 citations


Journal ArticleDOI
TL;DR: It is proved that both the belief reduct and the plausibility reduct are equivalent to the classical reduct in (random) information systems.

210 citations


Journal ArticleDOI
TL;DR: It is shown that the fuzzy-rough set attribute reduction algorithm fails to converge on many real datasets due to its poorly designed termination criteria, and that its computational complexity grows exponentially with the number of input variables and multiplicatively with the number of data patterns.

208 citations


Journal ArticleDOI
TL;DR: A new feature selection mechanism based on ant colony optimization is proposed in an attempt to combat the problem of finding optimal feature subsets in the fuzzy-rough data reduction process.

Journal ArticleDOI
TL;DR: Methods for selecting the appropriate granule size and for efficiently computing rough entropy are described; minimizing roughness in both the object and background regions determines the partitioning threshold.

Journal ArticleDOI
TL;DR: This article proposes a reduction of knowledge for incomplete ordered decision tables that eliminates only the information that is not essential from the point of view of the ordering of objects or decision rules.
Abstract: Rough sets theory has proved to be a useful mathematical tool for classification and prediction. However, as many real-world problems deal with ordering objects instead of classifying objects, one of the extensions of the classical rough sets approach is the dominance-based rough sets approach, which is mainly based on substitution of the indiscernibility relation by a dominance relation. In this article, we present a dominance-based rough sets approach to reasoning in incomplete ordered information systems. The approach shows how to find decision rules directly from an incomplete ordered decision table. We propose a reduction of knowledge that eliminates only that information that is not essential from the point of view of the ordering of objects or decision rules. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 13–27, 2005.

Book ChapterDOI
TL;DR: This paper considers L–fuzzy rough sets as a further generalization of the notion of rough sets, and takes a residuated lattice L as a basic structure.
Abstract: Rough sets were developed by Pawlak as a formal tool for representing and processing information in data tables. Fuzzy generalizations of rough sets were introduced by Dubois and Prade. In this paper, we consider L–fuzzy rough sets as a further generalization of the notion of rough sets. Specifically, we take a residuated lattice L as a basic structure. L–fuzzy rough sets are defined using the product operator and its residuum provided by the residuated lattice L. Depending on classes of binary fuzzy relations, we define several classes of L–fuzzy rough sets and investigate properties of these classes.

Journal ArticleDOI
TL;DR: A new algorithm, named the extended Chi2 algorithm, is proposed; it outperforms the original and modified Chi2 algorithms and remedies the modified algorithm's neglect of the variance in the two merged intervals.
Abstract: The variable precision rough sets (VPRS) model is a powerful tool for data mining and has been widely applied to acquire knowledge. Despite its diverse applications in many domains, the VPRS model unfortunately cannot be applied directly to real-world classification tasks involving continuous attributes; a discretization method is required to preprocess the data. Discretization is an effective technique for handling continuous attributes in data mining, especially for classification. The modified Chi2 algorithm is one of the modifications of the Chi2 algorithm: it replaces the inconsistency check of the Chi2 algorithm with the quality of approximation, a notion coined in rough set theory (RST), and takes into account the effect of degrees of freedom. However, classification with a controlled degree of uncertainty, or a misclassification error, is outside the realm of RST, and the algorithm also ignores the effect of variance in the two merged intervals. In this study, we propose a new algorithm, named the extended Chi2 algorithm, to overcome these two drawbacks. Experiments with the See5 software show that the proposed algorithm performs better than the original and modified Chi2 algorithms.
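The merging criterion shared by the Chi2 family can be sketched as the Pearson chi-square statistic computed over two adjacent intervals; this is the generic statistic common to these algorithms, not the extended algorithm's variance correction:

```python
def chi2_statistic(counts):
    """Pearson chi-square for two adjacent intervals.
    counts[i][j] = number of class-j examples falling in interval i."""
    rows = [sum(r) for r in counts]            # examples per interval
    cols = [sum(c) for c in zip(*counts)]      # examples per class
    n = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(counts):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / n
            if expected:  # common convention: skip empty expected cells
                chi2 += (observed - expected) ** 2 / expected
    return chi2

# Perfectly class-separating adjacent intervals give a large statistic
# (keep the boundary); identical class mixtures give zero (merge them).
separated = chi2_statistic([[10, 0], [0, 10]])
mixed = chi2_statistic([[5, 5], [5, 5]])
```

The Chi2 algorithms repeatedly merge the adjacent interval pair with the smallest statistic until a stopping criterion (inconsistency check, quality of approximation, etc.) fires.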

Book ChapterDOI
01 Jan 2005
TL;DR: The goal of the chapter is to present a knowledge discovery paradigm for multi-attribute and multicriteria decision making, which is based upon the concept of rough sets, in order to find concise classification patterns that agree with situations that are described by the data.
Abstract: In this chapter, we are concerned with discovering knowledge from data. The aim is to find concise classification patterns that agree with situations that are described by the data. Such patterns are useful for explanation of the data and for the prediction of future situations. They are particularly useful in such decision problems as technical diagnostics, performance evaluation and risk assessment. The situations are described by a set of attributes, which we might also call properties, features, characteristics, etc. Such attributes may be concerned with either the input or output of a situation. These situations may refer to states, examples, etc. Within this chapter, we will refer to them as objects. The goal of the chapter is to present a knowledge discovery paradigm for multi-attribute and multicriteria decision making, which is based upon the concept of rough sets. Rough set theory was introduced by Pawlak (Pawlak 1982, Pawlak 1991). Since then, it has often proved to be an excellent mathematical tool for the analysis of a vague description of objects. The adjective vague (referring to the quality of information) is concerned with inconsistency or ambiguity. The rough set philosophy is based on the assumption that with every object of the universe U there is associated a certain amount of information (data, knowledge). This information can be expressed by means of a number of attributes. The attributes describe the object. Objects which have the same description are said to be indiscernible (similar) with respect to the available information.
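The indiscernibility relation and the resulting Pawlak lower and upper approximations described above can be sketched directly; the toy decision table below is hypothetical:

```python
from collections import defaultdict

def indiscernibility_classes(table, attrs):
    """Partition objects into classes with identical values on attrs."""
    classes = defaultdict(set)
    for obj, desc in table.items():
        classes[tuple(desc[a] for a in attrs)].add(obj)
    return list(classes.values())

def approximations(classes, target):
    """Pawlak lower and upper approximations of a target set of objects."""
    lower = {x for c in classes if c <= target for x in c}   # classes inside target
    upper = {x for c in classes if c & target for x in c}    # classes touching target
    return lower, upper

table = {  # hypothetical decision table
    "o1": {"colour": "red", "size": "big"},
    "o2": {"colour": "red", "size": "big"},
    "o3": {"colour": "blue", "size": "small"},
}
classes = indiscernibility_classes(table, ("colour", "size"))
lower, upper = approximations(classes, {"o1", "o3"})
```

Here `o1` and `o2` are indiscernible, so the target set {o1, o3} is rough: its lower approximation is {o3} and its upper approximation is {o1, o2, o3}.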

Book ChapterDOI
TL;DR: This work discusses a possible algebraization of the concrete algebra of the power set of X through quasi BZ lattices that enables us to define two rough approximations based on a similarity and on a preclusive relation.
Abstract: Using as an example an incomplete information system whose support is a set of objects X, we discuss a possible algebraization of the concrete algebra of the power set of X through quasi BZ lattices. This structure enables us to define two rough approximations, one based on a similarity relation and one based on a preclusive relation, with the latter always better than the former. Then we turn our attention to Pawlak rough sets and consider some of their possible algebraic structures. Finally, we will see that fuzzy sets are also a model of the same algebras. Particular attention is given to the HW algebra, a strong and rich structure able to characterize both rough sets and fuzzy sets.

Book ChapterDOI
01 Jan 2005
TL;DR: The proposed Rough Bayesian model (RB) does not require information about the prior and posterior probabilities in case they are not provided in a confirmable way, and is related to the Bayes factor known from the Bayesian hypothesis testing methods.
Abstract: We present a novel approach to understanding the concepts of the theory of rough sets in terms of the inverse probabilities derivable from data. It is related to the Bayes factor known from Bayesian hypothesis testing methods. The proposed Rough Bayesian model (RB) does not require information about the prior and posterior probabilities when they are not provided in a confirmable way. We discuss RB with respect to its correspondence to the original Rough Set model (RS) introduced by Pawlak and the Variable Precision Rough Set model (VPRS) introduced by Ziarko. We pay special attention to RB's capability to deal with multi-decision problems. We also propose a method for distributed data storage relevant to the computational needs of our approach.

Journal ArticleDOI
TL;DR: It is proved that the measure of fuzziness of a partition-based fuzzy rough set, FR(A), is equal to zero if and only if the set A is crisp and definable.
Abstract: This paper extends Pawlak's rough set onto the basis of a fuzzy partition of the universe of discourse. Some basic properties of partition-based fuzzy approximation operators are examined. To measure uncertainty in generalized fuzzy rough sets, a new notion of entropy of a fuzzy set is introduced. The notion is demonstrated to be adequate for measuring the fuzziness of a fuzzy event. The entropy of a fuzzy partition and conditional entropy are also proposed. These kinds of entropy satisfy some basic properties similar to those of Shannon's entropy. It is proved that the measure of fuzziness of a partition-based fuzzy rough set, FR(A), is equal to zero if and only if the set A is crisp and definable.
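The paper's entropy measure is its own construction; as a simpler classical analogue with the same vanishing behaviour on crisp sets, the De Luca-Termini fuzzy entropy can be sketched (illustrative only, not the paper's FR(A)):

```python
import math

def fuzziness(memberships):
    """De Luca-Termini entropy of a fuzzy set: it vanishes exactly
    when every membership degree is 0 or 1, i.e. the set is crisp."""
    h = 0.0
    for m in memberships:
        for p in (m, 1 - m):
            if 0 < p < 1:          # x * log(x) -> 0 at the endpoints
                h -= p * math.log(p)
    return h
```

A crisp set such as `[0, 1, 1, 0]` has entropy 0, while a maximally fuzzy degree of 0.5 contributes ln 2, mirroring the "zero iff crisp" property proved for FR(A) in the paper.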

Journal ArticleDOI
TL;DR: A novel rough set-based pseudo outer-product (RSPOP) algorithm integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm and improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules.
Abstract: System modeling with neuro-fuzzy systems involves two contradictory requirements: interpretability versus accuracy. The pseudo outer-product (POP) rule identification algorithm used in the family of pseudo outer-product-based fuzzy neural networks (POPFNN) suffers from an exponential increase in the number of identified fuzzy rules and in computational complexity on high-dimensional data. This decreases the interpretability of the POPFNN in linguistic fuzzy modeling. This article proposes a novel rough set-based pseudo outer-product (RSPOP) algorithm that integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm. The proposed algorithm not only performs feature selection through the reduction of attributes but also extends the reduction to rules without redundant attributes. As many possible reducts exist in a given rule set, an objective measure is developed for POPFNN to correctly identify the reducts that improve the inferred consequence. Experimental results are presented using published data sets and a real-world application involving highway traffic flow prediction to evaluate the effectiveness of the proposed algorithm for identifying fuzzy rules in the POPFNN using the compositional rule of inference and singleton fuzzifier (POPFNN-CRI(S)) architecture. Results show that the proposed algorithm reduces computational complexity, improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules, and improves the accuracy of the POPFNN.

Journal Article
TL;DR: A comparative study of the quantitative relationship between basic concepts of rough set theory, such as attribute reduction, attribute significance, and the core, defined from the algebra and information viewpoints shows that the relationship between these concepts under the two viewpoints is one of inclusion rather than equivalence.
Abstract: Attribute reduction is an important issue in rough set theory and has been studied from both the algebra viewpoint and the information viewpoint of the theory. However, the concepts of attribute reduction based on these two viewpoints are not equivalent to each other. In this paper, we make a comparative study of the quantitative relationship between basic concepts of rough set theory, such as attribute reduction, attribute significance, and the core, as defined from these two viewpoints. The results show that the relationship between these concepts under the two viewpoints is one of inclusion rather than equivalence, because the information viewpoint places stronger restrictions on attributes and decision tables than the algebra viewpoint does. The two viewpoints coincide only for consistent decision tables; that is, the algebra viewpoint and the information viewpoint are equivalent for a consistent decision table and differ for an inconsistent one. These results are significant for the design and development of information reduction methods.

Book ChapterDOI
31 Aug 2005
TL;DR: An incremental attribute reduction algorithm is proposed: when new objects are added to a decision information system, a new attribute reduction can be obtained quickly.
Abstract: In research on knowledge acquisition based on rough set theory, attribute reduction is a key problem, and many algorithms for it have been proposed. Unfortunately, most of them are designed for static data processing, whereas much real data is generated dynamically. In this paper, an incremental attribute reduction algorithm is proposed: when new objects are added to a decision information system, a new attribute reduction can be obtained quickly by this method.

Book ChapterDOI
TL;DR: A collection of mathematical results on decision trees in areas of rough set theory and decision tree theory applications such as discrete optimization, analysis of acyclic programs, pattern recognition, fault diagnosis and probabilistic reasoning are contained.
Abstract: The research monograph is devoted to the study of bounds on time complexity in the worst case of decision trees and algorithms for decision tree construction. The monograph is organized in four parts. In the first part (Sects. 1 and 2) results of the monograph are discussed in context of rough set theory and decision tree theory. In the second part (Sect. 3) some tools for decision tree investigation based on the notion of decision table are described. In the third part (Sects. 4–6) general results about time complexity of decision trees over arbitrary (finite and infinite) information systems are considered. The fourth part (Sects. 7–11) contains a collection of mathematical results on decision trees in areas of rough set theory and decision tree theory applications such as discrete optimization, analysis of acyclic programs, pattern recognition, fault diagnosis and probabilistic reasoning.

Journal ArticleDOI
TL;DR: It is established that the proposed approach identifies various patterns in the sense of fuzzy-roughness, in addition to providing deeper insight into various concepts of fuzzy-rough sets.

Book ChapterDOI
31 Aug 2005
TL;DR: The article introduces the basic ideas and investigates the probabilistic version of rough set theory, which relies on both classification knowledge and Probabilistic knowledge in analysis of rules and attributes and has the monotonicity property.
Abstract: The article introduces the basic ideas of, and investigates, the probabilistic version of rough set theory, which relies on both classification knowledge and probabilistic knowledge in the analysis of rules and attributes. One-way and two-way inter-set dependency measures are proposed and applied to probabilistic rule evaluation. A probabilistic dependency measure for attributes is also proposed and demonstrated to have the monotonicity property. This property makes it possible to use the measure to optimize and evaluate attribute-based representations through computation of attribute reducts, the core, and significance factors.

Journal ArticleDOI
TL;DR: A generalized model of fuzzy rough sets based on general fuzzy relations is studied; properties and an algebraic characterization of the model are revealed, and relationships between this model and related models are discussed.
Abstract: The approximation of fuzzy sets in fuzzy information systems gives rise to the theory of fuzzy rough sets. This paper focuses on models of generalized fuzzy rough sets: a generalized model based on general fuzzy relations is studied, its properties and algebraic characterization are revealed, and relationships between this model and related models are discussed.


Book ChapterDOI
31 Aug 2005
TL;DR: A generalization of the original idea of rough sets and variable precision rough sets is introduced, based on the concept of absolute and relative rough membership, aimed at modeling data relationships expressed in terms of frequency distribution.
Abstract: A generalization of the original idea of rough sets and of variable precision rough sets is introduced. This generalization is based on the concept of absolute and relative rough membership. Like the variable precision rough set model, the generalization, called the parameterized rough set model, is aimed at modeling data relationships expressed in terms of frequency distributions rather than the full inclusion relation used in the classical rough set approach. However, unlike the variable precision rough set model, it considers one or more parameters modeling the degree to which the condition attribute values confirm the decision attribute value. The properties of this extended model are investigated and compared to those of the classical rough set model and the variable precision rough set model.


Journal ArticleDOI
TL;DR: The paper compares conventional and non-conventional clustering techniques, as well as temporal and non-temporal analyses of customer loyalty; interval set clustering is shown to add an interesting dimension to the temporal analysis.