Author

Dominik Ślęzak

Bio: Dominik Ślęzak is an academic researcher from the University of Warsaw. The author has contributed to research in the topics Rough set and Fuzzy number. The author has an h-index of 14 and has co-authored 42 publications receiving 1047 citations. Previous affiliations of Dominik Ślęzak include the Polish-Japanese Academy of Information Technology and the University of Regina.

Papers
Book ChapterDOI
09 Nov 1999
TL;DR: The problem of finding optimal approximate association rules is shown to be NP-hard, which makes results that enable the use of rough set algorithms in the search for association rules especially important for applications.
Abstract: We consider approximate versions of fundamental notions of the theories of rough sets and association rules. We analyze the complexity of searching for α-reducts, understood as subsets discerning “α-almost” objects from different decision classes, in decision tables. We present how optimal approximate association rules can be derived from data by using heuristics for searching for minimal α-reducts. NP-hardness of the problem of finding optimal approximate association rules is shown as well. This makes results that enable the use of rough set algorithms in the search for association rules extremely important for applications.

143 citations
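To make the α-reduct search concrete, here is a minimal sketch of a greedy heuristic in the spirit of the abstract above, under one common reading of the definition: a subset of attributes counts as an α-reduct if it discerns at least an α fraction of the object pairs from different decision classes that the full attribute set discerns. The function names and the toy decision table are illustrative assumptions, not the paper's actual algorithm or data.

```python
from itertools import combinations

def discerned_pairs(rows, decision, attrs):
    """Pairs of objects from different decision classes that differ
    on at least one attribute in `attrs`."""
    pairs = set()
    for i, j in combinations(range(len(rows)), 2):
        if decision[i] != decision[j] and any(rows[i][a] != rows[j][a] for a in attrs):
            pairs.add((i, j))
    return pairs

def greedy_alpha_reduct(rows, decision, all_attrs, alpha=0.9):
    """Greedily add attributes until at least `alpha` of the pairs
    discernible by the full attribute set are discerned."""
    target = discerned_pairs(rows, decision, all_attrs)
    needed = alpha * len(target)
    chosen, covered = [], set()
    while len(covered) < needed:
        best = max(
            (a for a in all_attrs if a not in chosen),
            key=lambda a: len(discerned_pairs(rows, decision, chosen + [a]) - covered),
        )
        chosen.append(best)
        covered = discerned_pairs(rows, decision, chosen)
    return chosen

# toy decision table (hypothetical): three conditional attributes, binary decision
rows = [
    {"a": 1, "b": 0, "c": 1},
    {"a": 1, "b": 1, "c": 0},
    {"a": 0, "b": 1, "c": 1},
    {"a": 0, "b": 0, "c": 0},
]
decision = [0, 0, 1, 1]
print(greedy_alpha_reduct(rows, decision, ["a", "b", "c"], alpha=1.0))  # -> ['a']
```

Each α-reduct found this way can then be turned into approximate association or decision rules whose premises use only the chosen attributes.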

Journal Article
TL;DR: Arithmetic operations defined on ordered fuzzy numbers make it possible to avoid some drawbacks of the classical approach; the orientation of the membership curve leads to a novel concept of an ordered fuzzy number, represented by an ordered pair of real continuous functions.
Abstract: A fuzzy counterpart of real numbers is presented. Fuzzy membership functions that satisfy conditions similar to quasi-convexity are considered. An extra feature, called the orientation of the curve of the fuzzy membership function, is introduced. It leads to a novel concept of an ordered fuzzy number, represented by an ordered pair of real continuous functions. Arithmetic operations defined on ordered fuzzy numbers make it possible to avoid some drawbacks of the classical approach.

140 citations
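The abstract above represents an ordered fuzzy number as an ordered pair of real continuous functions and defines arithmetic on that representation. The sketch below assumes the commonly used branch-wise (pointwise) operations; the class and variable names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OrderedFuzzyNumber:
    """An ordered pair (f, g) of continuous functions on [0, 1]; the order
    of the pair carries the orientation of the membership curve."""
    f: Callable[[float], float]
    g: Callable[[float], float]

    def __add__(self, other):
        # Addition acts pointwise on both branches.
        return OrderedFuzzyNumber(
            lambda s: self.f(s) + other.f(s),
            lambda s: self.g(s) + other.g(s),
        )

    def __neg__(self):
        return OrderedFuzzyNumber(lambda s: -self.f(s), lambda s: -self.g(s))

# A trapezoid-like number with support roughly [1, 4].
A = OrderedFuzzyNumber(lambda s: 1 + s, lambda s: 4 - s)
Z = A + (-A)
print(Z.f(0.0), Z.f(1.0), Z.g(0.0), Z.g(1.0))  # all 0.0: the crisp zero
```

Under this arithmetic A + (-A) collapses to the crisp zero instead of widening its support, which is the kind of drawback of classical interval-style fuzzy arithmetic that the abstract refers to avoiding.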

Book ChapterDOI
01 Jan 2005
TL;DR: The proposed Rough Bayesian model (RB) does not require information about prior and posterior probabilities when they are not available in a confirmable way; it is related to the Bayes factor known from Bayesian hypothesis testing.
Abstract: We present a novel approach to understanding the concepts of the theory of rough sets in terms of the inverse probabilities derivable from data. It is related to the Bayes factor known from Bayesian hypothesis testing methods. The proposed Rough Bayesian model (RB) does not require information about the prior and posterior probabilities when they are not provided in a confirmable way. We discuss RB with respect to its correspondence to the original Rough Set model (RS) introduced by Pawlak and to the Variable Precision Rough Set model (VPRS) introduced by Ziarko. We pay special attention to RB's capability to deal with multi-decision problems. We also propose a method for distributed data storage relevant to the computational needs of our approach.

125 citations
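A loose illustration of reasoning from inverse probabilities, in the spirit of the abstract above: each indiscernibility class is assigned to a positive, negative, or boundary region by comparing the frequencies estimating P(E | X) and P(E | not-X), without ever needing the prior probability of X. All names, the eps threshold, and the toy data are assumptions made for the sketch; this is not the paper's exact Rough Bayesian definition.

```python
from collections import Counter

def inverse_probabilities(rows, decision, attrs, target):
    """Estimate P(E | target) and P(E | not target) for every
    indiscernibility class E induced by `attrs`, as plain frequencies."""
    def key(i):
        return tuple(rows[i][a] for a in attrs)
    pos = [i for i, d in enumerate(decision) if d == target]
    neg = [i for i, d in enumerate(decision) if d != target]
    in_pos, in_neg = Counter(map(key, pos)), Counter(map(key, neg))
    return {
        E: (in_pos[E] / max(len(pos), 1), in_neg[E] / max(len(neg), 1))
        for E in set(in_pos) | set(in_neg)
    }

def rough_bayesian_regions(rows, decision, attrs, target, eps=0.0):
    """Label each class by a Bayes-factor-style comparison of the two
    inverse probabilities; no prior or posterior for `target` is used."""
    regions = {}
    for E, (p_pos, p_neg) in inverse_probabilities(rows, decision, attrs, target).items():
        if p_pos > (1 + eps) * p_neg:
            regions[E] = "POSITIVE"
        elif p_neg > (1 + eps) * p_pos:
            regions[E] = "NEGATIVE"
        else:
            regions[E] = "BOUNDARY"
    return regions

# hypothetical toy data
rows = [{"a": 0}, {"a": 0}, {"a": 1}, {"a": 1}, {"a": 1}]
decision = ["sick", "healthy", "sick", "sick", "sick"]
print(rough_bayesian_regions(rows, decision, ["a"], target="sick"))
```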

Book ChapterDOI
01 Dec 2000
TL;DR: This work shows how to use reduct approximations to develop flexible tools for the analysis of strongly inconsistent and/or noisy data tables; special attention is paid to the notion of a rough membership decision reduct.
Abstract: Various aspects of reduct approximations are discussed. In particular, we show how to use them to develop flexible tools for the analysis of strongly inconsistent and/or noisy data tables. Special attention is paid to the notion of a rough membership decision reduct: a feature subset (almost) preserving the frequency-based information about condition→decision dependencies. Approximate criteria for preserving this kind of information under attribute reduction are considered. These criteria are specified by using distances between frequency distributions and information measures related to different ways of interpreting rough-membership-based knowledge.

83 citations
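One concrete reading of "almost preserving the frequency-based information": for every object, the decision-frequency distribution induced by a candidate attribute subset should stay close to the one induced by the full attribute set. The sketch below uses total variation distance and a fixed eps as the approximate criterion; both choices, and all names, are assumptions rather than the paper's exact measures.

```python
from collections import Counter, defaultdict

def decision_distributions(rows, decision, attrs):
    """Decision-frequency distribution of the indiscernibility class of
    each object under `attrs` (the rough membership information)."""
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)].append(i)
    dist = {}
    for members in groups.values():
        counts = Counter(decision[i] for i in members)
        total = sum(counts.values())
        freq = {d: c / total for d, c in counts.items()}
        for i in members:
            dist[i] = freq
    return dist

def preserves_rough_membership(rows, decision, subset, full_attrs, eps=0.05):
    """The candidate subset passes if, for every object, its decision
    distribution stays within `eps` (total variation) of the full one."""
    full = decision_distributions(rows, decision, full_attrs)
    sub = decision_distributions(rows, decision, subset)
    decisions = set(decision)
    for i in range(len(rows)):
        tv = 0.5 * sum(abs(full[i].get(d, 0.0) - sub[i].get(d, 0.0)) for d in decisions)
        if tv > eps:
            return False
    return True
```

A minimal rough membership decision reduct would then be an inclusion-minimal subset for which this test still returns True.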

Journal Article
TL;DR: This work considers the family of normalized decision functions acting over conditional frequency distributions computed from data tables and draws a connection between such functions and approaches to generating inexact decision rules for new-case classification.
Abstract: We consider the family of normalized decision functions acting over conditional frequency distributions computed from data tables. We draw the connection between such functions and approaches to generating inexact decision rules for new-case classification. We also introduce the family of normalized decision measures corresponding to particular decision functions. They enable us to express the efficiency of particular strategies of reasoning with respect to given data. We show the properties of approximate decision rules and decision reducts based on normalized decision functions and measures. As a result, we obtain an intuitive and flexible tool for extracting approximate classification models from data.

74 citations
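For illustration, one very simple normalized decision function: given the conditional frequency distribution of decisions inside an indiscernibility class, return every decision whose frequency lies within a tolerance of the maximum; the returned set becomes the consequent of an inexact decision rule. The specific thresholding rule and the numbers are assumptions, not the family defined in the paper.

```python
def threshold_decision_function(freq, tolerance=0.2):
    """Return every decision whose conditional frequency is within
    `tolerance` of the most frequent one."""
    top = max(freq.values())
    return {d for d, p in freq.items() if p >= top - tolerance}

# conditional frequency distribution computed from one indiscernibility class
freq = {"yes": 0.55, "no": 0.40, "maybe": 0.05}
print(threshold_decision_function(freq))        # {'yes', 'no'} -> inexact rule
print(threshold_decision_function(freq, 0.05))  # {'yes'}       -> sharper rule
```

Varying the tolerance trades rule precision against coverage, which is one way to read the flexibility the abstract attributes to this family of functions and measures.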


Cited by
Journal ArticleDOI
TL;DR: The basic concepts of rough set theory are presented and some rough set-based research directions and applications are pointed out, indicating that the rough set approach is fundamentally important in artificial intelligence and cognitive sciences.

2,004 citations

Journal ArticleDOI
TL;DR: Some extensions of the rough set approach are presented and a challenge for rough set-based research is outlined.

1,161 citations

Journal ArticleDOI
Yiyu Yao
TL;DR: This paper provides an analysis of three-way decision rules in the classical rough set model and the decision-theoretic rough set models, enriched by ideas from Bayesian decision theory and hypothesis testing in statistics.

1,088 citations

Journal ArticleDOI
TL;DR: Methods based on the combination of rough sets and Boolean reasoning with applications in pattern recognition, machine learning, data mining and conflict analysis are discussed.

940 citations

Journal ArticleDOI
TL;DR: This paper reviews those techniques that preserve the underlying semantics of the data, using crisp and fuzzy rough set-based methodologies, and several approaches to feature selection based on rough set theory are experimentally compared.
Abstract: Semantics-preserving dimensionality reduction refers to the problem of selecting those input features that are most predictive of a given outcome; a problem encountered in many areas such as machine learning, pattern recognition, and signal processing. This has found successful application in tasks that involve data sets containing huge numbers of features (in the order of tens of thousands), which would be impossible to process further. Recent examples include text processing and Web content classification. One of the many successful applications of rough set theory has been to this feature selection area. This paper reviews those techniques that preserve the underlying semantics of the data, using crisp and fuzzy rough set-based methodologies. Several approaches to feature selection based on rough set theory are experimentally compared. Additionally, a new area in feature selection, feature grouping, is highlighted and a rough set-based feature grouping technique is detailed.

634 citations
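Among the rough-set feature selection techniques a review like the one above typically compares, a widely cited baseline is a greedy search driven by the dependency degree: the fraction of objects whose indiscernibility class is decision-consistent. The sketch below follows that QuickReduct-style scheme for a small in-memory decision table; it is a generic illustration, not the paper's own implementation, and all names are assumptions.

```python
from collections import defaultdict

def dependency_degree(rows, decision, attrs):
    """gamma_B(D): fraction of objects whose indiscernibility class under
    `attrs` contains a single decision value."""
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)].append(i)
    positive = sum(
        len(members) for members in groups.values()
        if len({decision[i] for i in members}) == 1
    )
    return positive / len(rows)

def quickreduct(rows, decision, all_attrs):
    """Greedily add the attribute that most increases the dependency
    degree until it matches the value for the full attribute set."""
    full = dependency_degree(rows, decision, all_attrs)
    reduct = []
    while dependency_degree(rows, decision, reduct) < full:
        best = max(
            (a for a in all_attrs if a not in reduct),
            key=lambda a: dependency_degree(rows, decision, reduct + [a]),
        )
        reduct.append(best)
    return reduct
```

Fuzzy-rough variants replace the crisp indiscernibility classes with fuzzy similarity classes but keep the same greedy skeleton.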