Book

# Fuzzy sets

01 Aug 1996 · pp. 394

TL;DR: A separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint.

Abstract: A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one. The notions of inclusion, union, intersection, complement, relation, convexity, etc., are extended to such sets, and various properties of these notions in the context of fuzzy sets are established. In particular, a separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint.
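The operations summarized in the abstract can be sketched directly. In a minimal Python sketch (the universe and the set names below are illustrative assumptions, not taken from the paper), Zadeh's definitions take the membership of the union as the pointwise maximum, the intersection as the pointwise minimum, and the complement as one minus the grade:

```python
# A fuzzy set over a finite universe, represented as a dict mapping each
# element to its grade of membership in [0, 1]. Elements absent from the
# dict are treated as having grade 0.

def fuzzy_union(a, b):
    """Membership of the union is the pointwise maximum of the grades."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_intersection(a, b):
    """Membership of the intersection is the pointwise minimum."""
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_complement(a):
    """Membership of the complement is one minus the grade."""
    return {x: 1.0 - mu for x, mu in a.items()}

def is_subset(a, b):
    """A is contained in B exactly when A's grade never exceeds B's."""
    return all(mu <= b.get(x, 0.0) for x, mu in a.items())

# Illustrative sets (assumed, not from the paper):
tall = {"ann": 0.9, "bob": 0.4, "carl": 0.1}
fast = {"ann": 0.3, "bob": 0.8, "carl": 0.1}

union = fuzzy_union(tall, fast)          # ann 0.9, bob 0.8, carl 0.1
meet = fuzzy_intersection(tall, fast)    # ann 0.3, bob 0.4, carl 0.1
```

Note that, unlike ordinary sets, a fuzzy set and its complement need not be disjoint: `fuzzy_intersection(tall, fuzzy_complement(tall))` is generally nonempty, which is part of what makes the separation theorem above nontrivial.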

##### Citations



08 Sep 2000

TL;DR: This book presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects, and provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.

Abstract: The increasing volume of data in modern business and science calls for more complex and sophisticated tools. Although advances in data mining technology have made extensive data collection much easier, the field is still evolving, and there is a constant need for new techniques and tools that can help us transform this data into useful information and knowledge. Since the previous edition's publication, great advances have been made in the field of data mining. Not only does the third edition of Data Mining: Concepts and Techniques continue the tradition of equipping you with an understanding and application of the theory and practice of discovering patterns hidden in large data sets, it also focuses on new, important topics in the field: data warehouses and data cube technology, mining data streams, mining social networks, and mining spatial, multimedia and other complex data. Each chapter is a stand-alone guide to a critical topic, presenting proven algorithms and sound implementations ready to be used directly or with strategic modification against live data. This is the resource you need if you want to apply today's most powerful data mining techniques to meet real business challenges.

* Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects.
* Addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields.
* Provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.

23,600 citations


TL;DR: An overview of pattern clustering methods from a statistical pattern recognition perspective is presented, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners.

Abstract: Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a combinatorially difficult problem, and differences in assumptions and contexts in different communities have made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with the goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.

14,054 citations

### Cites methods from "Fuzzy sets"

...Fuzzy clustering extends this notion to associate each pattern with every cluster using a membership function [Zadeh 1965]....

[...]


TL;DR: A review of predictive habitat distribution modeling is presented, which shows that a wide array of models has been developed to cover aspects as diverse as biogeography, conservation biology, climate change research, and habitat or species management.

Abstract: With the rise of new powerful statistical techniques and GIS tools, the development of predictive habitat distribution models has rapidly increased in ecology. Such models are static and probabilistic in nature, since they statistically relate the geographical distribution of species or communities to their present environment. A wide array of models has been developed to cover aspects as diverse as biogeography, conservation biology, climate change research, and habitat or species management. In this paper, we present a review of predictive habitat distribution modeling. The variety of statistical techniques used is growing. Ordinary multiple regression and its generalized form (GLM) are very popular and are often used for modeling species distributions. Other methods include neural networks, ordination and classification methods, Bayesian models, locally weighted approaches (e.g. GAM), environmental envelopes or even combinations of these models. The selection of an appropriate method should not depend solely on statistical considerations. Some models are better suited to reflect theoretical findings on the shape and nature of the species' response (or realized niche). Conceptual considerations include e.g. the trade-off between optimizing accuracy versus optimizing generality. In the field of static distribution modeling, the latter is mostly related to selecting appropriate predictor variables and to designing an appropriate procedure for model selection. New methods, including threshold-independent measures (e.g. receiver operating characteristic (ROC) plots) and resampling techniques (e.g. bootstrap, cross-validation), have been introduced in ecology for testing the accuracy of predictive models. The choice of an evaluation measure should be driven primarily by the goals of the study. This may lead to the attribution of different weights to the various types of prediction errors (e.g. omission, commission or confusion). Testing the model in a wider range of situations (in space and time) will permit one to define the range of applications for which the model predictions are suitable. In turn, the qualification of the model depends primarily on the goals of the study that define the qualification criteria and on the usability of the model, rather than on statistics alone. © 2000 Elsevier Science B.V. All rights reserved.

6,748 citations


TL;DR: Clustering algorithms for data sets appearing in statistics, computer science, and machine learning are surveyed, and their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts are illustrated.

Abstract: Data analysis plays an indispensable role in understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. This diversity, on one hand, equips us with many tools; on the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, such as proximity measures and cluster validation, are also discussed.

5,744 citations


TL;DR: The rating of each alternative and the weight of each criterion are described by linguistic terms which can be expressed as triangular fuzzy numbers, and a vertex method is proposed to calculate the distance between two triangular fuzzy numbers.

Abstract: The aim of this paper is to extend TOPSIS to the fuzzy environment. Owing to the vague concepts frequently represented in decision data, crisp values are inadequate to model real-life situations. In this paper, the rating of each alternative and the weight of each criterion are described by linguistic terms which can be expressed as triangular fuzzy numbers. Then, a vertex method is proposed to calculate the distance between two triangular fuzzy numbers. Following the concept of TOPSIS, a closeness coefficient is defined to determine the ranking order of all alternatives by calculating the distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS) simultaneously. Finally, an example is given to illustrate the procedure of the proposed method.
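The vertex method described in the abstract admits a compact sketch. Assuming a triangular fuzzy number is written as a triple (l, m, u) of its lower, modal, and upper points, the distance is the root-mean-square of the vertex-wise differences; the ratings and ideal solutions below are illustrative assumptions, not values from the paper:

```python
import math

def vertex_distance(m, n):
    """Vertex-method distance between two triangular fuzzy numbers
    m = (l1, m1, u1) and n = (l2, m2, u2):
    d = sqrt((1/3) * ((l1-l2)**2 + (m1-m2)**2 + (u1-u2)**2))."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m, n)) / 3.0)

def closeness_coefficient(d_pos, d_neg):
    """Rank alternatives by their distance to the fuzzy negative-ideal
    solution relative to the total distance: CC = d- / (d+ + d-).
    CC near 1 means the alternative is close to the positive ideal."""
    return d_neg / (d_pos + d_neg)

# Hypothetical normalized rating of one alternative on one criterion,
# with the ideal solutions taken as the extreme triples:
rating = (0.4, 0.6, 0.8)
fpis = (1.0, 1.0, 1.0)  # fuzzy positive-ideal solution
fnis = (0.0, 0.0, 0.0)  # fuzzy negative-ideal solution

cc = closeness_coefficient(vertex_distance(rating, fpis),
                           vertex_distance(rating, fnis))
```

In the full method the distances are summed over all criteria before the coefficient is computed; the single-criterion version above is just the core arithmetic.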

3,109 citations

### Cites background or methods from "Fuzzy sets"

...In the following, we briefly review some basic definitions of fuzzy sets from [2, 11, 12, 14–16]....

[...]

...The function value μÃ(x) is termed the grade of membership of x in Ã [14]....

[...]