Showing papers by "James C. Bezdek published in 1992"


Journal ArticleDOI
TL;DR: For a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, inconsistency in rating among experts was observed, with fuzzy c-means approaches being slightly preferred over feedforward cascade correlation results.
Abstract: Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms, and a supervised computational neural network. Initial clinical results are presented on normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. For a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed, with fuzzy c-means approaches being slightly preferred over feedforward cascade correlation results. Various facets of both approaches, such as supervised versus unsupervised learning, time complexity, and utility for the diagnostic process, are compared.

636 citations
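As a concrete reference point, here is a minimal NumPy sketch of the literal fuzzy c-means iteration the abstract builds on: alternate membership and center updates until the partition stabilizes. Function names, the random-membership initialization, and all parameter defaults are our own choices, not the paper's.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: alternate center and membership updates.

    X: (n, dim) data; c: number of clusters; m > 1: fuzzifier.
    Returns the membership matrix U (c, n) and centers V (c, dim).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                  # columns sum to 1
    V = None
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)    # weighted centers
        d = np.fmax(np.linalg.norm(X[None] - V[:, None], axis=2), 1e-10)
        inv = d ** (-2.0 / (m - 1.0))                   # u_ik ∝ d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=0)
    return U, V
```

On well-separated data the memberships harden toward 0/1, which matches the clean tissue-class separations reported for the volunteer studies.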



Proceedings ArticleDOI
08 Mar 1992
TL;DR: A fuzzy Kohonen clustering network which integrates the fuzzy c-means (FCM) model into the learning rate and updating strategies of the Kohonen network is proposed, and it is proved that the proposed scheme is equivalent to the c-Means algorithms.
Abstract: The authors propose a fuzzy Kohonen clustering network which integrates the fuzzy c-means (FCM) model into the learning rate and updating strategies of the Kohonen network. This yields an optimization problem related to FCM, and the numerical results show improved convergence as well as reduced labeling errors. It is proved that the proposed scheme is equivalent to the c-means algorithms. The new method can be viewed as a Kohonen type of FCM, but it is self-organizing, since the size of the update neighborhood and the learning rate in the competitive layer are automatically adjusted during learning. Anderson's IRIS data were used to illustrate this method. The results are compared with the standard Kohonen approach.

208 citations
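The integration described above can be sketched as follows: run FCM-style batch updates while annealing the fuzzifier m toward 1, so the fuzzy memberships u_ik act as self-adjusting Kohonen learning rates and every node is updated each pass. This is only an illustrative reading of the scheme; the quantile-based initialization and the floor of 1.1 on the fuzzifier are our assumptions, not the paper's parameters.

```python
import numpy as np

def fkcn(X, c, m0=2.0, iters=50):
    """Fuzzy Kohonen clustering network sketch: batch FCM updates with an
    annealed fuzzifier, so memberships play the role of learning rates."""
    V = np.quantile(X, np.linspace(0.1, 0.9, c), axis=0)   # spread initial nodes
    U = None
    for t in range(iters):
        m_t = max(m0 - (m0 - 1.0) * t / iters, 1.1)        # anneal toward crisp
        d = np.fmax(np.linalg.norm(X[None] - V[:, None], axis=2), 1e-10)
        inv = d ** (-2.0 / (m_t - 1.0))
        U = inv / inv.sum(axis=0)                          # fuzzy memberships
        W = U ** m_t                                       # per-node learning rates
        V = (W @ X) / W.sum(axis=1, keepdims=True)         # update every node
    return U, V
```

As m_t shrinks, the update neighborhood effectively narrows on its own, which is the self-organizing behavior the abstract emphasizes.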


Journal ArticleDOI
TL;DR: It is hoped that a careful discussion of the relationship between systems that exhibit each of these properties will serve to guide rational expectations and the development of models that exhibit or mimic “human behavior.”

114 citations


Journal ArticleDOI
TL;DR: A set of intuitively desirable axioms for a measure of total uncertainty is proposed and several theorems are proved about the new measure, which reduces to Shannon's probabilistic entropy when the basic probability assignment focuses only on singletons.

101 citations


Journal ArticleDOI
TL;DR: The role of and interaction between statistical, fuzzy, and neural-like models for certain problems associated with the three main areas of pattern recognition system design are discussed and some questions concerning fuzzy sets are answered.
Abstract: The role of and interaction between statistical, fuzzy, and neural-like models for certain problems associated with the three main areas of pattern recognition system design are discussed. Some questions concerning fuzzy sets are answered, and the design of fuzzy pattern recognition systems is reviewed. Pattern recognition, statistical pattern recognition and fuzzy pattern recognition systems are described. The use of computational neural-like networks in fuzzy pattern recognition is also discussed.

68 citations


Journal ArticleDOI
TL;DR: It is shown that fuzzy c-shells generates hyperspherical prototypes for the clusters it finds for certain special cases of the measure of dissimilarity used, and the general convergence theory for grouped coordinate minimization is applied.
Abstract: R. N. Dave's (1990) version of fuzzy c-shells is an iterative clustering algorithm which requires the application of Newton's method or a similar general optimization technique at each half step in any sequence of iterates for minimizing the associated objective function. An important computational question concerns the accuracy of the solution required at each half step within the overall iteration. The general convergence theory for grouped coordinate minimization is applied to this question to show that numerically exact solution of the half-step subproblems in Dave's algorithm is not necessary: one iteration of Newton's method in each coordinate minimization half step yields a sequence with the same convergence properties as one obtained using the fuzzy c-shells algorithm with numerically exact coordinate minimization at each half step. It is also shown that fuzzy c-shells generates hyperspherical prototypes for the clusters it finds for certain special cases of the measure of dissimilarity used.

42 citations
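To make the half-step structure concrete, here is a hedged Python sketch of shell clustering with circular prototypes (center v_i, radius r_i) and shell distance (||x_k - v_i|| - r_i)^2. In the spirit of the result that an exact inner solve is unnecessary, the center update below takes a single step per pass (a plain gradient step rather than a Newton step), while the radius update is closed form. The initialization, step size, and the gradient-step substitution are our assumptions.

```python
import numpy as np

def fuzzy_c_shells(X, c, m=2.0, iters=100, lr=0.05, seed=0):
    """Shell clustering sketch: circles (V, R) fit by alternating fuzzy
    memberships, a closed-form radius update, and one inexact center step."""
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    V = mu + 0.1 * rng.standard_normal((c, X.shape[1]))    # centers near data mean
    R = np.full(c, np.linalg.norm(X - mu, axis=1).mean())  # common initial radius
    U = None
    for _ in range(iters):
        diff = X[None] - V[:, None]                         # (c, n, dim)
        dist = np.fmax(np.linalg.norm(diff, axis=2), 1e-10)
        shell = (dist - R[:, None]) ** 2 + 1e-12            # squared shell distance
        inv = shell ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
        W = U ** m
        R = (W * dist).sum(axis=1) / W.sum(axis=1)          # optimal radius, closed form
        # one inexact step on the centers instead of an exact inner minimization
        g = -2.0 * (W * (dist - R[:, None]))[:, :, None] * diff / dist[:, :, None]
        V -= lr * g.sum(axis=1) / W.sum(axis=1, keepdims=True)
    return U, V, R
```

Fitting a single noiseless circle recovers its center and radius, illustrating the hyperspherical prototypes mentioned in the abstract.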


Proceedings ArticleDOI
TL;DR: Preliminary results of SFCM (applied to MRI segmentation) suggest that FCM finds the clusters of most interest to the user very accurately when training data is used to guide it.
Abstract: Partial supervision is introduced to the unsupervised fuzzy c-means algorithm (FCM). The resulting algorithm is called semi-supervised fuzzy c-means (SFCM). Labeled data are used as training information to improve FCM's performance. Training data are represented as training columns in SFCM's membership matrix (U), and are allowed to affect the cluster center computations. The degree of supervision is monitored by choosing the number of copies of the training set to be used in SFCM. Preliminary results of SFCM (applied to MRI segmentation) suggest that FCM finds the clusters of most interest to the user very accurately when training data is used to guide it. © 1992 SPIE--The International Society for Optical Engineering.

17 citations
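The mechanism described, fixed training columns in U that enter the center computation, with a copy count controlling the degree of supervision, can be sketched as follows. Variable names, the initialization, and the `copies` parameter name are ours, not the paper's notation.

```python
import numpy as np

def sfcm(X_u, X_l, y_l, c, m=2.0, copies=1, iters=100, seed=0):
    """Semi-supervised FCM sketch: labeled points carry fixed crisp membership
    columns that are included in the center update; replicating the training
    set `copies` times raises the degree of supervision."""
    rng = np.random.default_rng(seed)
    Xl = np.tile(X_l, (copies, 1))
    yl = np.tile(y_l, copies)
    Ul = np.zeros((c, len(yl)))
    Ul[yl, np.arange(len(yl))] = 1.0                    # fixed training columns of U
    n = len(X_u)
    U = rng.random((c, n))
    U /= U.sum(axis=0)
    V = None
    for _ in range(iters):
        Um = np.hstack([U, Ul]) ** m                    # labeled + unlabeled columns
        Xall = np.vstack([X_u, Xl])
        V = (Um @ Xall) / Um.sum(axis=1, keepdims=True) # centers see the labels
        d = np.fmax(np.linalg.norm(X_u[None] - V[:, None], axis=2), 1e-10)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)                       # only unlabeled columns update
    return U, V
```

Because the training columns are never re-estimated, they pin each cluster index to the user's intended class, which is how the labeled data "guide" FCM here.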


Proceedings ArticleDOI
01 Jun 1992
TL;DR: An unsupervised method, fuzzy c-means, that does not require training data sets and produces comparable results is proposed; it is likely to improve MRI contrast and provide greater confidence levels in 3-D visualization of pathology.
Abstract: The use of image intensity based segmentation techniques is proposed to improve MRI contrast and provide greater confidence levels in 3-D visualization of pathology. Pattern recognition methods are proposed using both supervised and unsupervised methods. This paper emphasizes the practical problems in the selection of training data sets for supervised methods that result in instability in segmentation. An unsupervised method, namely fuzzy c-means, that does not require training data sets and produces comparable results is proposed.

17 citations


Proceedings ArticleDOI
01 Nov 1992
TL;DR: Two generalizations of LVQ are presented that are explicitly designed as clustering algorithms: generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Experiments show that the final centroids produced by GLVQ are independent of node initialization and learning coefficients.
Abstract: This paper discusses the relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. We also discuss the impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method, but which often lends itself to clustering algorithms. Then we present two generalizations of LVQ that are explicitly designed as clustering algorithms: generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. We use Anderson's IRIS data to compare the performance of GLVQ/FLVQ with a standard version of LVQ. Experiments show that the final centroids produced by GLVQ are independent of node initialization and learning coefficients. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution--these are taken care of automatically.

11 citations
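The contrast drawn above, winner-only updates versus updating every node, can be demonstrated directly: a prototype initialized outside the convex hull of the data never wins and never moves under a Kohonen-style rule, while a fuzzy batch update pulls it in. Both routines are illustrative sketches, not the paper's exact GLVQ/FLVQ learning rules.

```python
import numpy as np

def lvq_winner_only(X, V, lr=0.1, epochs=10):
    """Unsupervised competitive update: only the winning prototype moves."""
    V = V.copy()
    for _ in range(epochs):
        for x in X:
            i = np.argmin(np.linalg.norm(V - x, axis=1))
            V[i] += lr * (x - V[i])           # losers are never touched
    return V

def flvq_batch(X, V, m=2.0, epochs=30):
    """FLVQ-style batch pass: every prototype moves, weighted by its fuzzy
    membership, so out-of-hull prototypes are still pulled into the data."""
    V = V.copy()
    for _ in range(epochs):
        d = np.fmax(np.linalg.norm(X[None] - V[:, None], axis=2), 1e-10)
        inv = d ** (-2.0 / (m - 1.0))
        W = (inv / inv.sum(axis=0)) ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)
    return V
```

The winner-only rule leaves the far-away prototype as a dead unit, exactly the initialization sensitivity the abstract attributes to LVQ and SHCM.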


Proceedings ArticleDOI
TL;DR: A method for training a standard feed forward, back propagation neural-like network using fuzzy label vectors whose performance goal is to produce edge images from standard imagery such as FLIR, video, and grey tone pictures is proposed.
Abstract: We propose a method for training a standard feed forward, back propagation neural-like network using fuzzy label vectors whose performance goal is to produce edge images from standard imagery such as FLIR, video, and grey tone pictures. Our method is based on training the network on a basis set of edge windows which are scored using the Sobel operator. The method is illustrated by comparing edge images of several real scenes with those derived using the Sobel and Canny image operators.
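The Sobel-based scoring of edge windows might look like the following sketch: convolve each interior 3x3 window with the Sobel kernels and squash the gradient magnitude into [0, 1] as a fuzzy edge label a network could be trained against. The max-normalization used for squashing is our assumption, not necessarily the paper's scoring.

```python
import numpy as np

def sobel_fuzzy_labels(img):
    """Score each interior pixel's 3x3 window with the Sobel operator and
    normalize the gradient magnitude into [0, 1] as a fuzzy edge label."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gy_k = gx_k.T
    H, W = img.shape
    mag = np.zeros((H, W))
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx = (gx_k * win).sum()            # horizontal gradient response
            gy = (gy_k * win).sum()            # vertical gradient response
            mag[r, c] = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag
```

On a vertical step image the two columns straddling the step score 1.0 and flat regions score 0.0, giving graded rather than binary training targets.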

Proceedings ArticleDOI
TL;DR: A generalization of learning vector quantization is proposed (which the authors shall call a Kohonen clustering network or KCN) which, unlike other methods, updates all the nodes with each input vector.
Abstract: Kohonen-like clustering algorithms (e.g., learning vector quantization) suffer from several major problems. For this class of algorithms, output often depends on the initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such an algorithm, even if it terminates, may not produce meaningful results in terms of prototypes for clustering. This is because it updates only the winner prototype with every input vector. In this paper we propose a generalization of learning vector quantization (which we shall call a Kohonen clustering network or KCN) which, unlike other methods, updates all the nodes with each input vector. Moreover, the network attempts to find a minimum of a well defined objective function. The learning rules depend on the degree of match to the winner node; the lower the degree of match with the winner, the greater the impact on nonwinner nodes. Our numerical results show that the generated prototypes do not depend on the initialization, learning coefficient, or the number of iterations (provided KCN runs for at least 200 passes through the data). We use Anderson's IRIS data to illustrate our method; and we compare our results with the standard Kohonen approach.

01 Dec 1992
TL;DR: A fuzzy generalization of a Kohonen learning vector quantization which integrates the Fuzzy c-Means model with the learning rate and updating strategies of the LVQ is used for image segmentation, which is related to the FCM optimization problem.
Abstract: In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
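The segmentation-as-clustering formulation can be sketched end to end: treat each pixel as a feature vector (here just its intensity), cluster with a plain FCM loop, and harden the memberships into region labels. This uses standard FCM rather than the paper's fuzzy LVQ network, and the deterministic center initialization is our choice.

```python
import numpy as np

def segment(img, c, m=2.0, iters=50):
    """Segment an image by clustering per-pixel feature vectors with FCM,
    then hardening memberships into region labels."""
    X = img.reshape(-1, 1).astype(float)            # one intensity feature per pixel
    V = np.linspace(X.min(), X.max(), c)[:, None]   # centers spread over the range
    U = None
    for _ in range(iters):
        d = np.fmax(np.abs(X - V.T), 1e-10)         # (n, c) distances
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)    # fuzzy memberships per pixel
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]      # updated intensity centers
    labels = U.argmax(axis=1).reshape(img.shape)    # harden into subregions
    return labels
```

Richer feature vectors (e.g., multiple MR channels per pixel) drop in by widening X; the clustering loop is unchanged.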

Book ChapterDOI
01 Jan 1992
TL;DR: The Hard and Fuzzy c-Means algorithms are extended to the case where the (dis)similarity measure on pairs of numerical vectors includes two members of the Minkowski or p-norm family, viz., the p = 1 and p = ∞ (or "sup") norms.
Abstract: We extend the Hard and Fuzzy c-Means (HCM/FCM) clustering algorithms to the case where the (dis)similarity measure on pairs of numerical vectors includes two members of the Minkowski or p-norm family, viz., the p = 1 and p = ∞ (or "sup") norms. We note that a basic exchange algorithm due to Bobrowski can be used to find approximate critical points of the new objective functions. This method broadens the applications horizon of the FCM family by enabling users to match "discontinuous" multidimensional numerical data structures with similarity measures which have nonhyperelliptical topologies. For example, data drawn from a mixture of uniform distributions have sharp or "boxy" edges; the (p = 1 and p = ∞) norms have open and closed sets that match these shapes. We illustrate the technique with a small artificial data set, and compare the results with the c-Means clustering solution produced using the Euclidean (inner product) norm.