
Granular Computing 

About: Granular Computing is an academic conference. The conference publishes mainly in the areas of rough sets and fuzzy logic. Over its lifetime, the conference has published 2,407 papers, which have received 25,967 citations.


Papers
Proceedings Article
25 Jul 2005
TL;DR: The generalized theory of uncertainty (GTU), which is outlined in this paper, breaks with the tradition of viewing uncertainty as a province of probability theory: it views uncertainty in a broader perspective and represents information as what is called a generalized constraint.
Abstract: It is a deep-seated tradition in science to view uncertainty as a province of probability theory. The generalized theory of uncertainty (GTU), which is outlined in this paper, breaks with this tradition and views uncertainty in a broader perspective. Uncertainty is an attribute of information. A fundamental premise of GTU is that information, whatever its form, may be represented as what is called a generalized constraint. The concept of a generalized constraint is the centerpiece of GTU.

524 citations
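
For context, the "generalized constraint" that the abstract calls the centerpiece of GTU is conventionally written in the canonical form below. This is an illustrative note, not a summary of the paper's full formalism, and the modalities listed are examples rather than an exhaustive inventory.

```latex
% Canonical form of a generalized constraint on a variable X (illustrative):
% R is the constraining relation and the index r identifies the modality,
% e.g. r = blank (possibilistic), r = p (probabilistic), r = v (veristic),
% r = u (usuality), r = rs (random set), r = fg (fuzzy graph).
GC(X):\quad X \;\mathrm{isr}\; R
```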

Proceedings Article
25 Jul 2005
TL;DR: Experiments on a real-world multi-label bioinformatics dataset show that ML-kNN is highly comparable to existing multi-label learning algorithms.
Abstract: In multi-label learning, each instance in the training set is associated with a set of labels, and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, a multi-label lazy learning approach named ML-kNN is presented, which is derived from the traditional k-nearest neighbor (kNN) algorithm. In detail, for each new instance, its k nearest neighbors are first identified. After that, according to the label sets of these neighboring instances, the maximum a posteriori (MAP) principle is utilized to determine the label set for the new instance. Experiments on a real-world multi-label bioinformatics dataset show that ML-kNN is highly comparable to existing multi-label learning algorithms.

416 citations
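
The ML-kNN procedure described in the abstract above (find the k nearest neighbors, then make a per-label maximum a posteriori decision from the neighbors' label counts) can be summarised in a short sketch. This is a minimal illustration, not the authors' implementation; the function names, the Laplace smoothing constant s, and the brute-force distance computation are assumptions made for the example.

```python
# Minimal ML-kNN sketch: k-NN search followed by a per-label MAP decision.
import numpy as np

def fit_mlknn(X, Y, k=3, s=1.0):
    """X: (n, d) features; Y: (n, q) binary label matrix."""
    n, q = Y.shape
    # Prior probability that each label is present (with smoothing).
    prior = (s + Y.sum(axis=0)) / (s * 2 + n)
    # For every training instance, count how many of its k neighbors carry each label.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)            # exclude the instance itself
    neigh = np.argsort(dists, axis=1)[:, :k]
    counts = Y[neigh].sum(axis=1)              # (n, q) neighbor label counts in 0..k
    # Likelihoods P(count = c | label present) and P(count = c | label absent).
    like_pos = np.zeros((q, k + 1))
    like_neg = np.zeros((q, k + 1))
    for l in range(q):
        for c in range(k + 1):
            like_pos[l, c] = (s + np.sum((Y[:, l] == 1) & (counts[:, l] == c))) / (s * (k + 1) + Y[:, l].sum())
            like_neg[l, c] = (s + np.sum((Y[:, l] == 0) & (counts[:, l] == c))) / (s * (k + 1) + (n - Y[:, l].sum()))
    return dict(X=X, Y=Y, k=k, prior=prior, like_pos=like_pos, like_neg=like_neg)

def predict_mlknn(model, x):
    """Return the predicted label indices for a single new instance x."""
    d = np.linalg.norm(model["X"] - x, axis=1)
    neigh = np.argsort(d)[: model["k"]]
    c = model["Y"][neigh].sum(axis=0)          # neighbor label counts for x
    q = model["Y"].shape[1]
    return [l for l in range(q)
            if model["prior"][l] * model["like_pos"][l, c[l]]
               > (1 - model["prior"][l]) * model["like_neg"][l, c[l]]]
```

Calling fit_mlknn on a small binary label matrix and then predict_mlknn on a new feature vector reproduces the count-then-MAP decision the abstract describes.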

Proceedings Article
25 Jul 2005
TL;DR: It is shown that soft sets are a class of special information systems, that partition-type soft sets and information systems have the same formal structures, and that fuzzy soft sets and fuzzy information systems are equivalent.
Abstract: This paper discusses the relationship between soft sets and information systems. It is shown that soft sets are a class of special information systems. After soft sets are extended to several classes of general cases, the more general results also show that partition-type soft sets and information systems have the same formal structures, and that fuzzy soft sets and fuzzy information systems are equivalent.

336 citations
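
The correspondence the abstract describes, between a soft set (a map from parameters to subsets of a universe U) and a Boolean-valued information system over U, is easy to illustrate. The sketch below is only an illustration of that correspondence; the helper name soft_set_to_table and the toy universe and parameters are made up for the example.

```python
# Illustrative only: a soft set (E -> subsets of U) rewritten as a Boolean
# information table with the objects of U as rows and the parameters E as columns.
def soft_set_to_table(universe, soft_set):
    """soft_set: dict mapping each parameter e to the subset F(e) of `universe`."""
    params = list(soft_set)
    return {u: {e: int(u in soft_set[e]) for e in params} for u in universe}

# Toy example (hypothetical data): houses described by parameters.
U = ["h1", "h2", "h3"]
F = {"cheap": {"h1", "h3"}, "wooden": {"h2"}, "modern": {"h2", "h3"}}
table = soft_set_to_table(U, F)
# table["h3"] == {"cheap": 1, "wooden": 0, "modern": 1}
```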

Proceedings Article
25 Jul 2005
TL;DR: It is argued that granular computing is more a philosophical way of thinking and a practical methodology of problem solving; by effectively using levels of granularity, it provides a systematic, natural way to analyze, understand, represent, and solve real-world problems.
Abstract: Granular computing emerges as a new multi-disciplinary study and has received much attention in recent years. A conceptual framework is presented by extracting shared commonalities from many fields. The framework stresses multiple views and multiple levels of understanding in each view. It is argued that granular computing is more about a philosophical way of thinking and a practical methodology of problem solving. By effectively using levels of granularity, granular computing provides a systematic, natural way to analyze, understand, represent, and solve real-world problems. With granular computing, one aims at structured thinking at the philosophical level, and structured problem solving at the practical level.

318 citations

Proceedings Article
10 May 2006
TL;DR: This paper uses RIPPER as the underlying rule classifier, implements a combination of oversampling (both by replication and by synthetic generation) and undersampling techniques to counter class imbalance, and proposes a clustering-based methodology for oversampling by generating synthetic instances.
Abstract: An approach to combating network intrusion is the development of systems applying machine learning and data mining techniques. Many IDS (intrusion detection systems) suffer from a high rate of false alarms and missed intrusions. We want to be able to improve the intrusion detection rate at a reduced false positive rate. The focus of this paper is rule learning, using RIPPER, on highly imbalanced intrusion datasets, with the objective of improving the true positive rate (intrusions) without significantly increasing the false positives. We use RIPPER as the underlying rule classifier. To counter imbalance in the data, we implement a combination of oversampling (both by replication and by synthetic generation) and undersampling techniques. We also propose a clustering-based methodology for oversampling by generating synthetic instances. We evaluate our approaches on two intrusion datasets (destination-based and actual-packets-based) constructed from actual Notre Dame traffic, giving a flavor of real-world data with its idiosyncrasies. Using ROC analysis, we show that oversampling by synthetic generation of the minority (intrusion) class outperforms oversampling by replication and RIPPER's loss ratio method. Additionally, we establish that our clustering-based approach is more suitable for detecting intrusions and provides additional improvement over synthetic generation of instances alone.

221 citations
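
The clustering-based oversampling idea in the abstract above (generate synthetic minority, i.e. intrusion, instances inside clusters of the minority class rather than across the whole class) can be sketched as follows. This is a generic reconstruction under stated assumptions, not the paper's exact procedure; the use of plain k-means, the interpolation step, and all parameter names are assumptions.

```python
# Sketch of clustering-based synthetic oversampling: cluster the minority
# (intrusion) instances, then create synthetic points by interpolating between
# members of the same cluster, so new samples stay within local regions.
import numpy as np

def cluster_oversample(X_min, n_new, n_clusters=3, rng=None):
    """X_min: (n, d) minority-class instances; returns (n_new, d) synthetic rows."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n_clusters = min(n_clusters, len(X_min))
    # A few iterations of plain k-means are enough for a sketch.
    centers = X_min[rng.choice(len(X_min), n_clusters, replace=False)]
    for _ in range(10):
        labels = np.argmin(((X_min[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X_min[labels == c].mean(axis=0)
    synthetic = []
    while len(synthetic) < n_new:
        members = X_min[labels == rng.integers(n_clusters)]
        if len(members) == 0:
            continue
        if len(members) == 1:                         # degenerate cluster: replicate
            synthetic.append(members[0])
            continue
        a, b = members[rng.choice(len(members), 2, replace=False)]
        synthetic.append(a + rng.random() * (b - a))  # point on the segment a-b
    return np.array(synthetic)
```

Appending the returned rows to the training set (labeled as intrusions) before rule learning is the generic way such oversampling is combined with a classifier like RIPPER.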

Performance Metrics

No. of papers from the Conference in previous years:

Year    Papers
2023    10
2022    33
2021    120
2020    40
2019    52
2018    29