
Showing papers by "James C. Bezdek published in 1997"


Proceedings ArticleDOI
01 Jul 1997
TL;DR: Fuzzy-possibilistic c-means (FPCM) is proposed, and it is shown that FPCM solves the noise sensitivity defect of FCM and also overcomes the coincident clusters problem of PCM.
Abstract: We justify the need for computing both membership and typicality values when clustering unlabeled data. Then we propose a new model called fuzzy-possibilistic c-means (FPCM). Unlike the fuzzy and possibilistic c-means (FCM/PCM) models, FPCM simultaneously produces both memberships and possibilities, along with the usual point prototypes or cluster centers for each cluster. We show that FPCM solves the noise sensitivity defect of FCM, and also overcomes the coincident clusters problem of PCM. Then we derive first order necessary conditions for extrema of the FPCM objective function, and use them as the basis for a standard alternating optimization approach to finding local minima. Three numerical examples are given that compare FCM to FPCM. Our calculations show that FPCM compares favorably to FCM.
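The alternating optimization that the paper applies to FPCM follows the same pattern as standard fuzzy c-means. As a point of reference, here is a minimal sketch of plain FCM; the FPCM update equations themselves are derived in the paper and are not reproduced here:

```python
# Minimal sketch of fuzzy c-means (FCM) alternating optimization.
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Alternate prototype and membership updates derived from the
    first-order necessary conditions for the FCM objective."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                  # fuzzy partition: columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)          # prototype update
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2) + 1e-12
        U = d2 ** (-1.0 / (m - 1))                            # membership update
        U /= U.sum(axis=0)
    return U, V

# Two well-separated 1-D clusters
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, V = fcm(X, c=2)
```

FPCM augments this scheme with a second, possibilistic matrix updated alongside U at each sweep.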

344 citations


Journal ArticleDOI
TL;DR: It is shown that the fuzzy c-means and fuzzy learning vector quantization algorithms are related to the proposed algorithms if the learning rate at each iteration is selected to satisfy a certain condition.
Abstract: Derives an interpretation for a family of competitive learning algorithms and investigates their relationship to fuzzy c-means and fuzzy learning vector quantization. These algorithms map a set of feature vectors into a set of prototypes associated with a competitive network that performs unsupervised learning. Derivation of the new algorithms is accomplished by minimizing an average generalized distance between the feature vectors and prototypes using gradient descent. A close relationship between the resulting algorithms and fuzzy c-means is revealed by investigating the functionals involved. It is also shown that the fuzzy c-means and fuzzy learning vector quantization algorithms are related to the proposed algorithms if the learning rate at each iteration is selected to satisfy a certain condition.
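As an illustration of the general idea only (not the paper's exact algorithm family), a basic winner-take-all competitive update that performs gradient descent on the squared distance between feature vectors and prototypes can be sketched as:

```python
# Illustrative competitive learning: move the winning prototype toward each input.
import numpy as np

def competitive_update(X, V, eta=0.1, n_epochs=50):
    """Online winner-take-all update; each step is a gradient descent
    step on the squared distance to the nearest prototype."""
    for _ in range(n_epochs):
        for x in X:
            d2 = ((V - x) ** 2).sum(axis=1)
            w = d2.argmin()              # competition: nearest prototype wins
            V[w] += eta * (x - V[w])     # gradient step, learning rate eta
    return V

X = np.array([[0.0, 0.0], [0.2, 0.0], [4.0, 4.0], [4.2, 4.0]])
V = np.array([[0.5, 0.5], [3.5, 3.5]])
V = competitive_update(X, V)
```

The paper's point is that when the learning rate eta is scheduled to satisfy a certain condition, updates of this kind coincide with fuzzy c-means and fuzzy learning vector quantization iterations.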

155 citations


Journal ArticleDOI
16 Dec 1997
TL;DR: Viewing mixture decomposition as probabilistic clustering rather than parametric estimation enables both fuzzy and crisp measures of cluster validity for this problem; the expectation-maximization algorithm is used to find clusters in the data.
Abstract: We study indices for choosing the correct number of components in a mixture of normal distributions. Previous studies have been confined to indices based wholly on probabilistic models. Viewing mixture decomposition as probabilistic clustering (where the emphasis is on partitioning for geometric substructure) as opposed to parametric estimation enables us to introduce both fuzzy and crisp measures of cluster validity for this problem. We presume the underlying samples to be unlabeled, and use the expectation-maximization (EM) algorithm to find clusters in the data. We test 16 probabilistic, 3 fuzzy and 4 crisp indices on 12 data sets that are samples from bivariate normal mixtures having either 3 or 6 components. Over three run averages based on different initializations of EM, 10 of the 23 indices tested for choosing the right number of mixture components were correct in at least 9 of the 12 trials. Among these were the fuzzy index of Xie-Beni, the crisp Davies-Bouldin index, and two crisp indices that are recent generalizations of Dunn's index.
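One of the validity indices named above, the Xie-Beni index, is the ratio of fuzzy compactness to minimum prototype separation; smaller values indicate a better partition. A minimal sketch:

```python
# Sketch of the Xie-Beni cluster validity index.
import numpy as np

def xie_beni(X, U, V):
    """Fuzzy compactness divided by n times the minimum squared
    separation between prototypes; lower is better."""
    n = X.shape[0]
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)   # (c, n) squared distances
    compact = (U ** 2 * d2).sum()
    sep = min(((V[i] - V[j]) ** 2).sum()
              for i in range(len(V)) for j in range(len(V)) if i != j)
    return compact / (n * sep)

# Tight, well-separated clusters with near-crisp memberships -> small index
X = np.array([[0.0], [0.1], [5.0], [5.1]])
V = np.array([[0.05], [5.05]])
U = np.array([[0.99, 0.99, 0.01, 0.01],
              [0.01, 0.01, 0.99, 0.99]])
xb = xie_beni(X, U, V)
```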

97 citations


Journal ArticleDOI
TL;DR: This survey is divided into methods based on supervised and unsupervised learning (that is, on whether there are or are not labelled data available for supervising the computations), and is organized first and foremost by groups that are active in this area.
Abstract: This paper updates several recent surveys on the use of fuzzy models for segmentation and edge detection in medical image data. Our survey is divided into methods based on supervised and unsupervised learning (that is, on whether there are or are not labelled data available for supervising the computations), and is organized first and foremost by groups (that we know of) that are active in this area. Our review is aimed more towards `who is doing it' rather than `how good it is'. This is partially dictated by the fact that a direct comparison of supervised and unsupervised methods is somewhat akin to comparing apples and oranges. There is a further subdivision into methods for two- and three-dimensional data and/or problems. We do not cover methods based on neural-like networks or fuzzy reasoning systems. These topics are covered in a recently published companion survey by Keller et al.

65 citations


01 Jan 1997
TL;DR: An erratum to Pal and Bezdek's discussion of validation of partitions produced by the fuzzy c-means (FCM) clustering algorithm; it reports corrected values for two tables and notes that the conclusions drawn in (1) from these simulations remain unchanged.
Abstract: Validation of partitions produced by the fuzzy c-means (FCM) clustering algorithm was discussed by Pal and Bezdek (1). Two tables in (1) contain erroneous values. This erratum reports the correct values, and notes that the conclusions drawn in (1), based on these simulations, remain unchanged.

44 citations


Proceedings ArticleDOI
09 Jun 1997
TL;DR: This article surveys the use of clustering for identification of various parameters of fuzzy systems and discusses the proper domain for clustering, the clustering algorithm used, validation of clusters, and system validation.
Abstract: This article surveys the use of clustering for identification of various parameters of fuzzy systems. Issues discussed include the proper domain for clustering, the clustering algorithm used, validation of clusters, and system validation.

35 citations


01 Jan 1997
TL;DR: A Fuzzy Generalized Nearest Prototype Classifier (FGNPC), which contains as special cases the 1-nearest neighbor rule, the minimum-distance classifier, and some types of radial-basis function networks and fuzzy if-then systems.
Abstract: We propose a Fuzzy Generalized Nearest Prototype Classifier (FGNPC). The classification decision is crisp and is based on aggregation of similarities between the unlabeled object x and n_p prototypes {v_i} with "soft" labels. FGNPC contains as special cases the 1-nearest neighbor rule, the minimum-distance classifier, and some types of radial-basis function networks and fuzzy if-then systems. An experimental illustration is also presented.
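The abstract does not give the paper's precise aggregation operators; the following sketch only illustrates the general scheme, with an assumed Gaussian similarity and a similarity-weighted soft-label vote:

```python
# Illustrative nearest-prototype classification with soft-labeled prototypes.
import numpy as np

def soft_prototype_classify(x, prototypes, soft_labels):
    """Aggregate similarities between x and soft-labeled prototypes,
    then make a crisp decision (argmax over class scores)."""
    d2 = ((prototypes - x) ** 2).sum(axis=1)
    sim = np.exp(-d2)                 # assumed similarity; the paper's choice may differ
    scores = sim @ soft_labels        # similarity-weighted soft-label aggregation
    return int(scores.argmax())       # crisp decision

prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
soft_labels = np.array([[0.9, 0.1],   # mostly class 0
                        [0.6, 0.4],
                        [0.1, 0.9]])  # mostly class 1
label = soft_prototype_classify(np.array([0.2, 0.1]), prototypes, soft_labels)
```

With crisp one-hot labels and a single nearest winner, this reduces to the 1-nearest-prototype (minimum-distance) rule named in the abstract.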

15 citations


Proceedings ArticleDOI
01 Jul 1997
TL;DR: Statistical features (mean and variance) are considered for region-based image segmentation and Ellipsoidal attractors are transformed to approximately linear structures which are well separated and enable a robust segmentation.
Abstract: Statistical features (mean and variance) are considered for region-based image segmentation. These features contain self similar structures which we interpret as fractal objects. Ellipsoidal attractors are transformed to approximately linear structures which are well separated and enable a robust segmentation. We identify class locations using fuzzy c-elliptotypes. The clustering results then yield segmentation using maximum membership defuzzification or, equivalently, a nearest prototype classifier. The method is applied to the digital mammograms from the Mammographic Image Analysis Society and produces reasonable segmentation in all cases.
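The final step named above, maximum membership defuzzification, simply assigns each pixel to its highest-membership class:

```python
# Maximum membership defuzzification of a fuzzy partition.
import numpy as np

# Membership matrix: rows = classes, columns = pixels
U = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.1],
              [0.1, 0.3, 0.8]])
labels = U.argmax(axis=0)   # each pixel gets its maximum-membership class
```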

10 citations


Patent
18 Aug 1997
TL;DR: In this paper, a membership function type is selected from a set of predetermined membership function types, and membership values for the training data vectors and center of gravity values for membership functions are determined in alternation in an iterative method for all training data vector and for each membership function.
Abstract: In an apparatus and method for computerized design of fuzzy rules from training data vectors, a membership function type is selected from a set of predetermined membership function types, and membership values for the training data vectors and center of gravity values for the membership functions are determined in alternation in an iterative method, for all training data vectors and for each membership function. When an abort criterion is met, the most recently determined membership functions, described by the center of gravity values, are employed as rules.

8 citations


Proceedings ArticleDOI
28 Jul 1997
TL;DR: The main contribution of this note is in presenting a new, simpler nonparametric approach that derives a common usable form of data directly from the membership functions.
Abstract: One goal of sensor-fusion methods is the integration of data of various types into a common usable form. Here we seek a uniform framework for the following three types of data: (1) numerical (e.g., x equals 74.1); (2) interval (e.g., x equals [73.9,75.2]); and (3) fuzzy (e.g., x equals tall, where tall is described by a suitable membership function). The problem context of this paper is clustering, which is the problem of separating a set of objects into self-similar groups, but other types of data analysis can be handled similarly. Earlier work on this problem has produced both parametric and nonparametric approaches. The parametric approach is only possible in cases when all the fuzzy data have membership functions coming from a single parametric family of curves, and in that case, the specific parameter values provide numerical data that can easily be used with standard clustering techniques such as the fuzzy c-means algorithm. The more difficult and interesting problem involves the nonparametric case, where there is not a common parametric form for the membership functions. The earlier nonparametric approach produces numerical data for clustering via necessity and possibility values which are derived using a set of `cognitive landmarks'. The main contribution of this note is in presenting a new, simpler nonparametric approach that derives a common usable form of data directly from the membership functions. The new approach is described and then demonstrated using a specific example.
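The abstract does not reproduce the new construction, but the general idea of a common usable form can be illustrated by sampling a membership function for each of the three data types on a shared grid. The Gaussian and triangular shapes below are assumptions for illustration, not the paper's choices:

```python
# Illustrative common representation for numerical, interval, and fuzzy data:
# sample a membership function for each datum on one shared grid.
import numpy as np

grid = np.linspace(60.0, 90.0, 301)   # common domain for all data types

def numeric_mf(v, width=0.5):
    """A number as a narrow membership function (assumed Gaussian)."""
    return np.exp(-((grid - v) / width) ** 2)

def interval_mf(lo, hi):
    """An interval as its indicator function on the grid."""
    return ((grid >= lo) & (grid <= hi)).astype(float)

def fuzzy_mf(center, spread):
    """A linguistic value like 'tall', modeled here as a triangular function."""
    return np.clip(1.0 - np.abs(grid - center) / spread, 0.0, 1.0)

# All three observations now live in one vector space and can be clustered together.
data = np.stack([numeric_mf(74.1), interval_mf(73.9, 75.2), fuzzy_mf(78.0, 6.0)])
```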

8 citations


Journal ArticleDOI
TL;DR: Determinants of square matrices over the interval [0,1], where ordinary multiplication is replaced by a triangular norm and ordinary addition is replaced by a triangular conorm, are studied.
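The permutation expansion makes this concrete: a t-norm replaces each product and a t-conorm replaces the sum. A minimal sketch with min and max (one common choice; the paper's exact definition may differ, e.g. in how signs are handled):

```python
# Determinant-like value over [0,1] via the permutation expansion,
# with a t-norm in place of product and a t-conorm in place of sum.
from functools import reduce
from itertools import permutations

def tnorm_det(A, tnorm=min, conorm=max):
    """Combine each permutation's entries with the t-norm, then combine
    the permutation terms with the t-conorm. Signs are dropped here,
    since subtraction has no analogue on ([0,1], t-norm, t-conorm)."""
    n = len(A)
    terms = (reduce(tnorm, (A[i][s[i]] for i in range(n)))
             for s in permutations(range(n)))
    return reduce(conorm, terms)

A = [[0.9, 0.2],
     [0.3, 0.8]]
d = tnorm_det(A)   # max(min(0.9, 0.8), min(0.2, 0.3)) = 0.8
```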