
Showing papers on "Rough set published in 2013"


Journal ArticleDOI
01 Oct 2013
TL;DR: The study introduces and discusses a principle of justifiable granularity, which supports a coherent way of designing information granules in presence of experimental evidence (either of numerical or granular character).
Abstract: The study introduces and discusses a principle of justifiable granularity, which supports a coherent way of designing information granules in the presence of experimental evidence (either of numerical or granular character). The term ''justifiable'' pertains to the construction of the information granule, which is formed in such a way that it is (a) highly legitimate (justified) in light of the experimental evidence, and (b) specific enough, meaning it comes with well-articulated semantics (meaning). The design process is associated with a well-defined optimization problem balancing the two requirements of experimental justification and specificity. A series of experiments is provided, and a number of constructs realized for various formalisms of information granules (intervals, fuzzy sets, rough sets, and shadowed sets) are discussed.
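For numeric evidence, the trade-off between justification (coverage of the data) and specificity (narrowness of the granule) can be illustrated with a brute-force interval search. This is only a sketch under assumed definitions — the product scoring, the median anchor, and the linear specificity measure are illustrative choices, not the paper's exact formulation:

```python
def justifiable_interval(data, alpha=1.0):
    """Brute-force search for an interval [a, b] around the median that
    maximizes coverage * specificity -- an illustrative sketch of the
    principle of justifiable granularity for numeric evidence.
    Assumes the data are not all identical (span > 0)."""
    data = sorted(data)
    n = len(data)
    med = (data[(n - 1) // 2] + data[n // 2]) / 2.0
    span = data[-1] - data[0]
    best, best_score = (med, med), -1.0
    for a in (v for v in data if v <= med):
        for b in (v for v in data if v >= med):
            coverage = sum(a <= v <= b for v in data) / n  # justification
            specificity = 1.0 - (b - a) / span  # penalize wide intervals
            score = (coverage ** alpha) * specificity
            if score > best_score:
                best, best_score = (a, b), score
    return best, best_score
```

Widening the interval raises coverage but lowers specificity, so the optimum sits strictly between a single point and the whole range.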

225 citations


Journal ArticleDOI
TL;DR: A new definition of attribute reduct for decision-theoretic rough set models is provided and a heuristic approach, a genetic approach and a simulated annealing approach to the new problem are proposed.

210 citations


Journal ArticleDOI
TL;DR: Three types of covering-based multigranulation rough sets are presented, in which set approximations are defined by different covering approximation operators; these models enrich MGRS theory and enlarge its application scope.

186 citations


Journal ArticleDOI
TL;DR: This study provides a solution for determining the values of the loss functions of DTRS and extends its range of applications through the use of particle swarm optimization.

171 citations


Journal ArticleDOI
TL;DR: A rule-based classifier has been built that covers all the diseased rice plant images and provides superior results compared to traditional classifiers.

165 citations


Journal ArticleDOI
TL;DR: It is shown that the test-cost-sensitive multigranulation rough set is a generalization of optimistic, pessimistic and β-multigranulation rough sets, and a backtracking algorithm is proposed for granular structure selection with minimal test cost.

125 citations


Journal ArticleDOI
TL;DR: It is proven that the set of all the lower (resp. upper) L-fuzzy approximation sets forms a complete lattice when the L-relation is reflexive.

119 citations


Journal ArticleDOI
TL;DR: A new approach is introduced to study roughness through soft sets, where parametrized subsets of a universe set are the basic building blocks for lower and upper approximations of a subset.
Abstract: Theories of soft sets and rough sets are two different approaches to vagueness. A possible fusion of rough sets and soft sets was proposed by F. Feng et al. They introduced the concept of soft rough sets, where parametrized subsets of a universe set are the basic building blocks for lower and upper approximations of a subset. In the present paper, a new approach is introduced to study roughness through soft sets. In this new technique of finding approximations of a set, the flavour of both the theory of soft sets and the theory of rough sets is retained. The new model may be called the modified soft rough set (MSR-set). It is shown that in this new model, information granules are finer than in soft rough sets. Some results which were not valid for soft rough sets can be proved for MSR-sets. Finally, the concept of approximations of an information system with respect to another information system is studied.
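The soft rough approximations of Feng et al., which this paper takes as its starting point, follow directly from the definition: the lower approximation collects the parametrized subsets contained in X, the upper approximation those that meet X. A minimal sketch (function and variable names are mine):

```python
def soft_lower_upper(F, X):
    """Soft rough approximations in the sense of Feng et al.
    F maps each parameter e to a subset F(e) of the universe:
      lower(X) = union of all F(e) contained in X,
      upper(X) = union of all F(e) having non-empty intersection with X."""
    X = set(X)
    lower, upper = set(), set()
    for e, Fe in F.items():
        Fe = set(Fe)
        if Fe <= X:       # F(e) entirely inside X -> certainly in X
            lower |= Fe
        if Fe & X:        # F(e) touches X -> possibly in X
            upper |= Fe
    return lower, upper
```

Unlike classical rough sets, the blocks F(e) need not partition the universe, which is why lower(X) need not be a subset of X in general.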

110 citations


Journal ArticleDOI
TL;DR: This paper studies optimal scale selection in multi-scale decision tables from the perspective of granular computing. A multi-scale decision table is an attribute-value system in which each object under each attribute is represented by different scales at different levels of granulation, with a granular information transformation from finer to coarser labelled values.

109 citations


Journal ArticleDOI
TL;DR: A general fuzzy multi-objective model with expected objectives and chance constraints (ECM) based on the Me is established and two approximation models are proposed by dividing the fuzzy feasible region using rough set theory.

107 citations


Journal ArticleDOI
TL;DR: A dimension incremental strategy for reduct computation is developed that can find a new reduct in a much shorter time when an attribute set is added to a decision table; the developed algorithm is effective and efficient.
Abstract: Many real data sets in databases may vary dynamically. With the rapid development of data processing tools, databases increase quickly not only in rows (objects) but also in columns (attributes) nowadays. This phenomenon occurs in several fields including image processing, gene sequencing and risk prediction in management. Rough set theory has been conceived as a valid mathematical tool to analyze various types of data. A key problem in rough set theory is executing attribute reduction for a data set. This paper focuses on attribute reduction for data sets with dynamically-increasing attributes. Information entropy is a common measure of uncertainty and has been widely used to construct attribute reduction algorithms. Based on three representative entropies, this paper develops a dimension incremental strategy for reduct computation. When an attribute set is added to a decision table, the developed algorithm can find a new reduct in a much shorter time. Experiments on six data sets downloaded from UCI show that, compared with the traditional non-incremental reduction algorithm, the developed algorithm is effective and efficient.
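The entropy measure underlying such reduction algorithms can be sketched as follows. This is a plain, non-incremental greedy reduct based on Shannon conditional entropy — an illustration of the measure being maintained, not the paper's incremental algorithm:

```python
import math
from collections import Counter, defaultdict

def partition(rows, attrs):
    """Group objects (rows) by their values on the given attribute indices."""
    blocks = defaultdict(list)
    for row in rows:
        blocks[tuple(row[a] for a in attrs)].append(row)
    return list(blocks.values())

def conditional_entropy(rows, attrs, decision):
    """Shannon conditional entropy H(D | B) of the decision attribute
    given the condition-attribute subset B."""
    n = len(rows)
    h = 0.0
    for block in partition(rows, attrs):
        p_block = len(block) / n
        counts = Counter(row[decision] for row in block)
        for c in counts.values():
            p = c / len(block)
            h -= p_block * p * math.log2(p)
    return h

def greedy_reduct(rows, cond_attrs, decision):
    """Repeatedly add the attribute that lowers H(D|B) most, until B carries
    as much information about D as the full attribute set does."""
    target = conditional_entropy(rows, cond_attrs, decision)
    reduct = []
    while conditional_entropy(rows, reduct, decision) > target + 1e-12:
        best = min((a for a in cond_attrs if a not in reduct),
                   key=lambda a: conditional_entropy(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct
```

The incremental strategy in the paper avoids recomputing these entropies from scratch when columns are added; the sketch above recomputes everything each time.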

Journal ArticleDOI
01 Jan 2013
TL;DR: An attribute reduction algorithm for data sets with dynamically varying data values, based on three representative entropies, that can find a new reduct in a much shorter time when a part of the data in a given data set is replaced by some new data.
Abstract: Many real data sets in databases may vary dynamically. With such data sets, one has to run a knowledge acquisition algorithm repeatedly in order to acquire new knowledge. This is a very time-consuming process. To overcome this deficiency, several approaches have been developed to deal with dynamic databases. They mainly address knowledge updating from three aspects: the expansion of data, the increasing number of attributes and the variation of data values. This paper focuses on attribute reduction for data sets with dynamically varying data values. Information entropy is a common measure of uncertainty and has been widely used to construct attribute reduction algorithms. Based on three representative entropies, this paper develops an attribute reduction algorithm for data sets with dynamically varying data values. When a part of data in a given data set is replaced by some new data, compared with the classic reduction algorithms based on the three entropies, the developed algorithm can find a new reduct in a much shorter time. Experiments on six data sets downloaded from UCI show that the algorithm is effective and efficient.

Journal ArticleDOI
TL;DR: The principles of updating P-dominating sets and P-dominated sets when some attributes are added into or deleted from the attribute set P are discussed and incremental approaches and algorithms for updating approximations in DRSA are proposed.
Abstract: Dominance-based Rough Sets Approach (DRSA) is a generalized model of the classical Rough Sets Theory (RST) which may handle information with preference-ordered attribute domains. The attribute set in the information system may evolve over time. Approximations of DRSA used to induce decision rules need updating for knowledge discovery and other related tasks. We first introduce a kind of dominance matrix to calculate P-dominating sets and P-dominated sets in DRSA. Then we discuss the principles of updating P-dominating sets and P-dominated sets when some attributes are added into or deleted from the attribute set P. Furthermore, we propose incremental approaches and algorithms for updating approximations in DRSA. The proposed incremental approaches effectively reduce the computational time in comparison with the non-incremental approach, as validated by experimental evaluations on different data sets from UCI.
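The P-dominating and P-dominated sets that these updating principles operate on are simple to compute from scratch. A non-incremental sketch (identifiers and the dict-based representation are illustrative assumptions):

```python
def dominance_sets(objects, P, x):
    """P-dominating set of x: objects at least as good as x on every
    criterion in P; P-dominated set: objects at most as good.
    `objects` maps object id -> dict of criterion -> value (larger = better)."""
    xv = objects[x]
    dominating = {y for y, yv in objects.items()
                  if all(yv[q] >= xv[q] for q in P)}
    dominated = {y for y, yv in objects.items()
                 if all(yv[q] <= xv[q] for q in P)}
    return dominating, dominated
```

Because both sets are defined by a conjunction over the criteria in P, adding or removing a criterion only tightens or relaxes that conjunction — which is exactly the structure the paper's incremental updates exploit.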

Journal ArticleDOI
01 Jan 2013
TL;DR: A hybridization of fuzzy c-means (FCM) and rough set theory (RST) techniques is proposed as a new solution for the supplier selection, evaluation and development problem; the proposed method not only selects the best supplier(s) but also clusters all of the vendors with respect to fuzzy similarity degrees.
Abstract: The supplier evaluation and selection process has a critical role and significant impact on purchasing management in the supply chain. It is also a complex multiple criteria decision making problem which is affected by several conflicting factors. Since multiple criteria affect the evaluation and selection process, deciding which criteria have the most critical roles in decision making is a very important step for supplier selection, evaluation and particularly development. In this study, a hybridization of fuzzy c-means (FCM) and rough set theory (RST) techniques is proposed as a new solution for the supplier selection, evaluation and development problem. First the vendors are clustered with the FCM algorithm; then the formed clusters are represented by their prototypes, which are used for labeling the clusters. RST is used at the next step of modeling, where we discover the primary features, in other words the core evaluation criteria of the suppliers, and extract the decision rules for characterizing the clusters. The obtained results show that the proposed method not only selects the best supplier(s) but also clusters all of the vendors with respect to fuzzy similarity degrees, decides the most critical criteria for supplier evaluation and extracts decision rules from the data.

Journal ArticleDOI
TL;DR: An efficient method is proposed to select initial prototypes of different gene clusters, which enables the proposed c-means algorithm to converge to an optimum or near-optimum solution and helps to discover coexpressed gene clusters.
Abstract: Gene expression data clustering is one of the important tasks of functional genomics as it provides a powerful tool for studying functional relationships of genes in a biological process. Identifying coexpressed groups of genes represents the basic challenge in the gene clustering problem. In this regard, a gene clustering algorithm, termed robust rough-fuzzy c-means, is proposed, judiciously integrating the merits of rough sets and fuzzy sets. While the concept of lower and upper approximations of rough sets deals with uncertainty, vagueness, and incompleteness in cluster definition, the integration of probabilistic and possibilistic memberships of fuzzy sets enables efficient handling of overlapping partitions in noisy environments. The concept of possibilistic lower bound and probabilistic boundary of a cluster, introduced in robust rough-fuzzy c-means, enables efficient selection of gene clusters. An efficient method is proposed to select initial prototypes of different gene clusters, which enables the proposed c-means algorithm to converge to an optimum or near-optimum solution and helps to discover coexpressed gene clusters. The effectiveness of the algorithm, along with a comparison with other algorithms, is demonstrated both qualitatively and quantitatively on 14 yeast microarray data sets.

Journal ArticleDOI
TL;DR: Two incremental algorithms for updating the approximations in disjunctive/conjunctive set-valued information systems are proposed, and results indicate the incremental approaches significantly outperform non-incremental approaches, with a dramatic reduction in computational time.
Abstract: Incremental learning is an efficient technique for knowledge discovery in a dynamic database, which enables acquiring additional knowledge from new data without forgetting prior knowledge. Rough set theory has been successfully used in information systems for classification analysis. Set-valued information systems are generalized models of single-valued information systems, which can be classified into two categories: disjunctive and conjunctive. Approximations are fundamental concepts of rough set theory, which need to be updated incrementally while the object set varies over time in set-valued information systems. In this paper, we analyze the updating mechanisms for computing approximations with the variation of the object set. Two incremental algorithms for updating the approximations in disjunctive/conjunctive set-valued information systems are proposed, respectively. Furthermore, extensive experiments are carried out on several data sets to verify the performance of the proposed algorithms. The results indicate the incremental approaches significantly outperform non-incremental approaches, with a dramatic reduction in computational time.
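For the disjunctive case, a common choice of relation is the tolerance relation that holds when two objects' value sets overlap on every attribute; the approximations then follow the usual rough set pattern. A from-scratch sketch (not the incremental update itself; names are mine):

```python
def tolerance_classes(table, attrs):
    """Disjunctive set-valued system: objects x, y are tolerant when their
    value sets overlap on every attribute in attrs.
    `table` maps object id -> dict of attribute -> set of values."""
    def tolerant(x, y):
        return all(table[x][a] & table[y][a] for a in attrs)
    return {x: {y for y in table if tolerant(x, y)} for x in table}

def approximations(table, attrs, X):
    """Lower/upper approximation of the object set X under the tolerance
    relation induced by attrs."""
    X = set(X)
    cls = tolerance_classes(table, attrs)
    lower = {x for x in table if cls[x] <= X}   # class entirely inside X
    upper = {x for x in table if cls[x] & X}    # class touches X
    return lower, upper
```

When objects enter or leave, only the tolerance classes that mention them change, which is the observation incremental algorithms build on.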

Journal ArticleDOI
TL;DR: A new hybrid medical decision support system based on rough sets and extreme learning machine (ELM) has been proposed for the diagnosis of hepatitis disease, and it has been observed that the RS-ELM model is considerably more successful than the other methods in the literature.

Journal ArticleDOI
TL;DR: The proposed rough set model is an effective means of extracting knowledge from dominance-based interval-valued intuitionistic fuzzy information systems and is applied to computer auditing risk assessment, decision-making problems in wealth management, and pattern classification.

Journal ArticleDOI
TL;DR: A new form of conditional entropy is introduced to measure the importance of attributes in incomplete decision systems, and is used to construct three attribute selection approaches: an exhaustive search strategy approach, a greedy (heuristic) search strategy approach and a probabilistic search approach for incomplete decision systems.
Abstract: Shannon's entropy and its variants have been applied to measure uncertainty in rough set theory from the viewpoint of information theory. However, few studies have been done on attribute selection in incomplete decision systems based on information-theoretical measurement of attribute importance. In this paper, we introduce a new form of conditional entropy to measure the importance of attributes in incomplete decision systems. Based on the introduced conditional entropy, we construct three attribute selection approaches, including an exhaustive search strategy approach, a greedy (heuristic) search strategy approach and a probabilistic search approach for incomplete decision systems. To test the effectiveness of these methods, experiments on several real-life incomplete data sets are conducted. The results indicate that two of these methods are effective for attribute selection in incomplete decision systems.
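In incomplete systems the equivalence classes of complete systems are typically replaced by tolerance classes in which a missing value is compatible with anything; entropy measures are then computed over these classes. A minimal sketch of that tolerance relation (a hypothetical helper, not the paper's code):

```python
def tolerance_class(rows, attrs, i, missing=None):
    """Tolerance class of object i in an incomplete decision system:
    objects are tolerant when they agree on every attribute in attrs,
    except where either value is missing (here encoded as None)."""
    def compatible(u, v):
        return all(u[a] == v[a] or u[a] is missing or v[a] is missing
                   for a in attrs)
    return [j for j, row in enumerate(rows) if compatible(rows[i], row)]
```

Because tolerance is reflexive and symmetric but not transitive, these classes overlap, which is what distinguishes the incomplete-system entropy from the classical one.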

Journal ArticleDOI
TL;DR: A variable-precision-dominance-based rough set approach is proposed, based on the substitution of the indiscernibility relation by the α-dominance relation, and a knowledge discovery framework is formulated for interval-valued information systems.

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter presents jMAF, a rough set data analysis software that employs the java Rough Set (jRS) library, which implements data analysis methods provided by the (variable consistency) Dominance-based Rough Set Approach (DRSA).
Abstract: We present a rough set data analysis software, jMAF. It employs the java Rough Set (jRS) library, which implements data analysis methods provided by the (variable consistency) Dominance-based Rough Set Approach (DRSA). The chapter also provides some basics of the DRSA and of its variable consistency extension.

Journal ArticleDOI
TL;DR: Vibration signals are used for fault diagnosis of centrifugal pumps using wavelet analysis, and the results are presented in the form of a confusion matrix, which shows the classification capability of wavelet features with rough sets and fuzzy logic for fault diagnosis of a monoblock centrifugal pump.

Journal ArticleDOI
01 Jul 2013
TL;DR: The single-granulation tolerance rough set model is extended to two types of multi-granulation tolerance rough set models (MGTRS); from their properties, it can be found that the rough set model based on a single tolerance relation is a special instance of MGTRS.
Abstract: The original rough set model is primarily concerned with the approximations of sets described by a single equivalence relation on the universe. Some further investigations generalize the classical rough set model to a rough set model based on a tolerance relation. From the granular computing point of view, the classical rough set theory is based on a single granulation. For some complicated issues, the classical rough set model was extended to the multi-granulation rough set model (MGRS). This paper extends the single-granulation tolerance rough set model (SGTRS) to two types of multi-granulation tolerance rough set models (MGTRS). Some important properties of the two types of MGTRS are investigated. From these properties, it can be found that the rough set model based on a single tolerance relation is a special instance of MGTRS. Moreover, the relationships and differences among SGTRS, the first type of MGTRS and the second type of MGTRS are discussed. Furthermore, several important measures are presented for the two types of MGTRS, such as the rough measure and quality of approximation. Several examples are considered to illustrate the two types of multi-granulation tolerance rough set models. The results from this research are both theoretically and practically meaningful for data reduction.
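The two standard flavours of multi-granulation lower approximation differ only in a quantifier: the optimistic version needs the class of x under at least one granulation to fit inside X, the pessimistic version needs this under all of them. A sketch, assuming tolerance classes have been precomputed per relation (the data layout is an illustrative assumption):

```python
def multigranulation_lower(classes_per_relation, X):
    """Optimistic and pessimistic multi-granulation lower approximations.
    classes_per_relation[i] maps each object x to its tolerance class
    under relation R_i; X is the target set of objects."""
    X = set(X)
    universe = classes_per_relation[0].keys()
    optimistic = {x for x in universe
                  if any(cls[x] <= X for cls in classes_per_relation)}
    pessimistic = {x for x in universe
                   if all(cls[x] <= X for cls in classes_per_relation)}
    return optimistic, pessimistic
```

With a single relation the two definitions coincide, which is the "special instance" observation in the abstract.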

Journal ArticleDOI
TL;DR: The principles of incrementally updating P-dominating sets and P-dominated sets are discussed, and an incremental approach for updating approximations of DRSA is proposed and shown to outperform the original nonincremental approach.
Abstract: Dominance-based rough sets approach (DRSA) is an effective tool to deal with information with preference-ordered attribute domains and decision classes. Any information system may evolve when new objects enter into or old objects get out. Approximations of DRSA need updating for decision analysis or other related tasks. Incremental updating is a feasible and effective technique to update approximations. The purpose of this paper is to present an incremental approach for updating approximations of DRSA. The approach is applicable to dynamic information systems when the set of objects varies over time. In this paper, we discuss the principles of incrementally updating P-dominating sets and P-dominated sets and propose an incremental approach for updating approximations of DRSA. A numerical example is given to illustrate the incremental approach. The experimental evaluations on data sets from UCI show that the incremental approach outperforms the original nonincremental one. © 2013 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper will try to show that FCA actually provides support for processing large dynamical complex data augmented with additional knowledge.
Abstract: During the last three decades, formal concept analysis (FCA) became a well-known formalism in data analysis and knowledge discovery because of its usefulness in important domains of knowledge discovery in databases (KDD) such as ontology engineering, association rule mining, and machine learning, as well as its relation to other established theories for representing knowledge processing, like description logics, conceptual graphs, and rough sets. In its early days, FCA was sometimes misconceived as a static, crisp, hardly scalable formalism for binary data tables. In this paper, we will try to show that FCA actually provides support for processing large, dynamical, complex (possibly uncertain) data augmented with additional knowledge. © 2013 Wiley Periodicals, Inc.
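The machinery of FCA rests on two derivation operators over a binary object-attribute context, and a formal concept is a pair fixed by both. A minimal sketch (function names and the dict-of-sets representation are mine):

```python
def prime_objects(context, objs):
    """A': attributes shared by every object in objs.
    `context` maps each object to the set of attributes it has."""
    if not objs:
        return set().union(*context.values())  # empty set of objects -> all attributes
    return set.intersection(*(context[o] for o in objs))

def prime_attrs(context, attrs):
    """B': objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if set(attrs) <= a}

def is_formal_concept(context, objs, attrs):
    """(A, B) is a formal concept iff A' = B and B' = A."""
    return (prime_objects(context, objs) == set(attrs)
            and prime_attrs(context, attrs) == set(objs))
```

The set of all such concepts, ordered by extent inclusion, forms the concept lattice that FCA-based KDD methods operate on.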

Journal ArticleDOI
TL;DR: A fuzzy relation is defined and a fuzzy rough set model is constructed for set-valued information systems based on a tolerance relation that examines whether two set values have a non-empty intersection.

Journal ArticleDOI
Jianhua Dai1
TL;DR: It is suggested that the proposed tolerance-fuzzy rough set model provides an alternative approach to handling incomplete numerical data.

Journal ArticleDOI
TL;DR: In this paper, several unsupervised feature selection (FS) approaches are presented which are based on fuzzy-rough sets; they require no thresholding information, are domain-independent, and can operate on real-valued data without the need for discretisation.

Journal ArticleDOI
TL;DR: A novel multi-label classification framework, MLNRS, based on neighborhood rough sets is proposed for automatic image annotation; it considers the uncertainty of the mapping from visual feature space to semantic concept space.

Journal ArticleDOI
TL;DR: This study hybridizes a novel genetic algorithm with rough set theory, called the rough penalty genetic algorithm (RPGA), with the aim of effectively achieving robust solutions and resolving constrained optimization problems.