Book•

Pattern Recognition with Fuzzy Objective Function Algorithms

31 Jul 1981
TL;DR: Bezdek's monograph develops pattern recognition around fuzzy objective functions, most notably the fuzzy c-means family of clustering algorithms.
Abstract: This book presents clustering and classifier design methods built on fuzzy objective functions. It covers fuzzy partitions of finite data, the fuzzy c-means algorithms and their convergence properties, measures of cluster validity, and applications of fuzzy models to classifier design.
Citations
Journal Article•DOI•
TL;DR: A general approach to qualitative modeling based on fuzzy logic is discussed, which proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model.
Abstract: This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator's control action.
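The fuzzy c-means method mentioned here alternates two closed-form updates: recompute each prototype as the membership-weighted mean of the data, then recompute memberships from inverse distances. A minimal sketch of that loop, assuming Euclidean distances and a random initial partition (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: alternate prototype and membership updates to
    descend Bezdek's objective J_m = sum_ik u_ik^m * ||x_k - v_i||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # each column sums to 1
    for _ in range(n_iter):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)   # weighted means
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))       # u_ik ∝ d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=0)
    return U, V
```

Each column of U still sums to 1, so row i reads as the degree to which every point belongs to cluster i; with well-separated clusters the memberships approach crisp 0/1 values.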

2,447 citations

Journal Article•DOI•
TL;DR: An appropriate objective function whose minimum will characterize a good possibilistic partition of the data is constructed, and the membership and prototype update equations are derived from necessary conditions for minimization of the criterion function.
Abstract: The clustering problem is cast in the framework of possibility theory. The approach differs from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values can be interpreted as degrees of possibility of the points belonging to the classes, i.e., the compatibilities of the points with the class prototypes. An appropriate objective function whose minimum will characterize a good possibilistic partition of the data is constructed, and the membership and prototype update equations are derived from necessary conditions for minimization of the criterion function. The advantages of the resulting family of possibilistic algorithms are illustrated by several examples.
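The membership update that follows from those necessary conditions has a closed form for each point-prototype pair: u_ik = 1/(1 + (d_ik²/η_i)^(1/(m−1))), where η_i scales the zone of influence of prototype i. A sketch of just that update for fixed prototypes (the function name, toy data, and η values are illustrative, not from the paper):

```python
import numpy as np

def pcm_typicalities(X, V, eta, m=2.0):
    """Possibilistic membership update:
    u_ik = 1 / (1 + (d_ik^2 / eta_i)^(1/(m-1))).
    Each value is the compatibility of point k with prototype i."""
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)  # (c, n)
    return 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))
```

Because u_ik depends only on the distance to prototype i, columns need not sum to 1, so an outlier far from every prototype receives low typicality for all classes instead of being forced into one.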

2,388 citations

Journal Article•DOI•
TL;DR: This paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains, including general-purpose clustering algorithms commonly used in time series clustering studies.

2,336 citations


Cites background from "Pattern Recognition with Fuzzy Obje..."

  • ...Keywords: Time series data; Clustering; Distance measure; Data mining...

  • ...This procedure works only with time series of equal length, because the distance between two time series at some cross sections (time points where one series does not have a value) is ill defined....

Journal Article•DOI•
01 Jul 1985
TL;DR: The theory of fuzzy sets is introduced into the K-nearest neighbor technique to develop a fuzzy version of the algorithm, and three methods of assigning fuzzy memberships to the labeled samples are proposed.
Abstract: Classification of objects is an important area of research and application in a variety of fields. In the presence of full knowledge of the underlying probabilities, Bayes decision theory gives optimal error rates. In those cases where this information is not present, many algorithms make use of distance or similarity among samples as a means of classification. The K-nearest neighbor decision rule has often been used in these pattern recognition problems. One of the difficulties that arises when utilizing this technique is that each of the labeled samples is given equal importance in deciding the class memberships of the pattern to be classified, regardless of their 'typicalness'. The theory of fuzzy sets is introduced into the K-nearest neighbor technique to develop a fuzzy version of the algorithm. Three methods of assigning fuzzy memberships to the labeled samples are proposed, and experimental results and comparisons to the crisp version are presented.
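The fuzzy variant described here weights each of the k nearest neighbors by its fuzzy class memberships and by inverse distance, so atypical labeled samples influence the decision less than in the crisp rule. A minimal sketch under the common inverse-distance weighting (the function name and data layout are assumptions, not the paper's notation):

```python
import numpy as np

def fuzzy_knn(x, X_train, U_train, k=3, m=2.0):
    """Fuzzy k-NN: class memberships of x are a distance-weighted
    blend of the fuzzy labels of its k nearest training samples.
    U_train has shape (n_classes, n_samples); crisp labels can be
    supplied as one-hot columns."""
    d = np.linalg.norm(X_train - x, axis=1) + 1e-12
    nn = np.argsort(d)[:k]                   # indices of k nearest
    w = d[nn] ** (-2.0 / (m - 1.0))          # inverse-distance weights
    return (U_train[:, nn] * w).sum(axis=1) / w.sum()
```

When the training memberships are one-hot, the returned vector still sums to 1 and its argmax gives the predicted class.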

2,323 citations


Cites background from "Pattern Recognition with Fuzzy Obje..."

  • ...Thus the impetus behind the introduction of fuzzy set theory was to provide a means of defining categories that are inherently imprecise [24]....

  • ...In [24] Bezdek suggests that interesting and useful algorithms could result from the allocation of fuzzy class membership to the input vector, thus affording fuzzy decisions based on fuzzy labels....

  • ...Since that time researchers have found numerous ways to utilize this theory to generalize existing techniques and to develop new algorithms in pattern recognition and decision analysis [24]-[27]....

Journal Article•DOI•
TL;DR: Two algorithms which extend the k-means algorithm to categorical domains and domains with mixed numeric and categorical values are presented and are shown to be efficient when clustering large data sets, which is critical to data mining applications.
Abstract: The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categorical domains and domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow for clustering objects described by mixed numeric and categorical attributes. We use the well known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications.
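The combined dissimilarity measure is the piece that lets k-prototypes mix attribute types: squared Euclidean distance on the numeric attributes plus a weight γ times the simple-matching mismatch count on the categorical ones. A minimal sketch (the helper name, attribute layout, and γ value are illustrative):

```python
def k_prototypes_dissim(a, b, num_idx, cat_idx, gamma=1.0):
    """k-prototypes dissimilarity: squared Euclidean distance over
    numeric attributes plus gamma times the number of mismatched
    categorical attributes (simple matching)."""
    numeric = sum((a[i] - b[i]) ** 2 for i in num_idx)
    categorical = sum(a[i] != b[i] for i in cat_idx)
    return numeric + gamma * categorical
```

With γ = 0 the measure reduces to k-means's squared distance; a large γ makes categorical mismatches dominate, so γ balances the two attribute types.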

2,289 citations


Cites methods from "Pattern Recognition with Fuzzy Obje..."

  • ...The sophisticated variants of the k-means algorithm include the well-known ISODATA algorithm (Ball and Hall, 1967) and the fuzzy k-means algorithms (Ruspini, 1969, 1973; Bezdek, 1981)....


References
Journal Article•DOI•

14,009 citations


"Pattern Recognition with Fuzzy Obje..." refers methods in this paper

  • ...Fisher(38) first used it to exemplify linear discriminant analysis....

  • ...The Iris data have been used as a test set by at least a dozen authors, including Fisher,(38) Kendall,(64) Friedman and Rubin,(40) Wolfe,(117) Scott and Symons,(95) and Backer....

Journal Article•DOI•
TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points, so it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
Abstract: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R* -- the minimum probability of error over all decision rules taking underlying probability structure into account. However, in a large sample analysis, we will show in the M-category case that R* ≤ R ≤ R*(2 − MR*/(M−1)), where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
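The Cover-Hart bound is easy to evaluate: for M categories the asymptotic 1-NN error R satisfies R ≤ R*(2 − MR*/(M−1)), which never exceeds 2R*. A small helper to compute it (the function name is illustrative):

```python
def cover_hart_upper(bayes_risk, n_categories):
    """Cover-Hart asymptotic upper bound on nearest neighbor error:
    R <= R*(2 - M*R*/(M-1)), always at most twice the Bayes risk."""
    M = n_categories
    return bayes_risk * (2.0 - M * bayes_risk / (M - 1.0))
```

For M = 2 and R* = 0.1 this gives 0.18, below the crude 2R* = 0.2 ceiling, and the gap narrows as R* approaches 0.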

12,243 citations


"Pattern Recognition with Fuzzy Obje..." refers background or methods in this paper

  • ...In fact, the efficiency of the 1-NN classifier is asymptotically less than twice the theoretically optimal Bayes risk: Cover and Hart showed in (29) that...

  • ...4) can be improved: the tighter upper bound derived in (29) is...

  • ...In particular, no analysis such as Cover and Hart's(29) has been formulated for fuzzy classifier designs; whether fuzzy classifiers such as {hb}PCM in (S26) and {!Jh-NP in (S27) have nice asymptotic relations to {fjb} or others remains to be discovered....

Book•
01 Jan 1972
TL;DR: This completely revised second edition presents an introduction to statistical pattern recognition, which is appropriate as a text for introductory courses in pattern recognition and as a reference book for workers in the field.
Abstract: This completely revised second edition presents an introduction to statistical pattern recognition. Pattern recognition in general covers a wide range of problems: it is applied to engineering problems, such as character readers and wave form analysis, as well as to brain modeling in biology and psychology. Statistical decision and estimation, which are the main subjects of this book, are regarded as fundamental to the study of pattern recognition. This book is appropriate as a text for introductory courses in pattern recognition and as a reference book for workers in the field. Each chapter contains computer projects as well as exercises.

10,526 citations


"Pattern Recognition with Fuzzy Obje..." refers background in this paper

  • ...(42) Texts on general pattern recognition include those of Bongard,(25) Patrick,(81) Tou and Wilcox,(104) Tou and Gonzalez,(103) and Duda and Hart....

01 Jan 1973
TL;DR: In this paper, two fuzzy versions of the k-means optimal, least squared error partitioning problem are formulated for finite subsets X of a general inner product space, and the extremizing solutions are shown to be fixed points of a certain operator T on the class of fuzzy, k-partitions of X, and simple iteration of T provides an algorithm which has the descent property relative to the LSE criterion function.
Abstract: Two fuzzy versions of the k-means optimal, least squared error partitioning problem are formulated for finite subsets X of a general inner product space. In both cases, the extremizing solutions are shown to be fixed points of a certain operator T on the class of fuzzy, k-partitions of X, and simple iteration of T provides an algorithm which has the descent property relative to the least squared error criterion function. In the first case, the range of T consists largely of ordinary (i.e. non-fuzzy) partitions of X and the associated iteration scheme is essentially the well known ISODATA process of Ball and Hall. However, in the second case, the range of T consists mainly of fuzzy partitions and the associated algorithm is new; when X consists of k compact well separated (CWS) clusters, Xi , this algorithm generates a limiting partition with membership functions which closely approximate the characteristic functions of the clusters Xi . However, when X is not the union of k CWS clusters, the limi...

5,254 citations

Book•
01 Dec 1973

5,169 citations


"Pattern Recognition with Fuzzy Obje..." refers background or methods in this paper

  • ...1e) is quite involved; and sizes of local error (εL), loop error (εI), and a measure of closeness for matrices in Vcn (‖U(l+1) − U(l)‖) must be chosen....

  • ...Clustering, for example, is ably represented by the books of Anderberg,(1) Tryon and Bailey,(109) and Hartigan....

  • ...l that groups together (1, 3) and (10, 3) and minimizes Jw(U, v)....

  • ...X = {(1, 1), (1, 3), (10, 1), (10, 3), (5, 2)}....

  • ...[Answer: {(1, 1), (1, 3)} ∪ {(5, 2)} ∪ {(10, 1), (10, 3)}.]...
