Author

Ching Y. Suen

Bio: Ching Y. Suen is an academic researcher from Concordia University. The author has contributed to research in topics including handwriting recognition and feature extraction. The author has an h-index of 65 and has co-authored 511 publications receiving 23,594 citations. Previous affiliations of Ching Y. Suen include École de technologie supérieure and Concordia University Wisconsin.


Papers
Journal ArticleDOI
TL;DR: A novel approach to the combination of multiple classifiers (CME) is developed and discussed in detail; using both the proposed data transformation function and a multi-layer perceptron neural network considerably increased the recognition rates of three individual classifiers.
Abstract: Due to different writing styles and various kinds of noise, the recognition of handwritten numerals is an extremely complicated problem. Recently, a new trend has emerged to tackle this problem through the use of multiple classifiers. This method combines individual classification decisions to derive the final decision and is called "Combination of Multiple Classifiers" (CME). In this paper, a novel approach to CME is developed and discussed in detail. It contains two steps: data transformation and data classification. In data transformation, the output values of each classifier are first transformed into a form of likeness measurement. The larger the likeness measurement, the more likely it is that the input belongs to the corresponding class. In data classification, neural networks have been found very suitable for aggregating the transformed outputs to produce the final classification decisions. Some strategies for further improving the performance of the neural networks are also proposed in this paper. Experiments with several data transformation functions and data classification approaches have been performed on a large number of handwritten samples. The best result among them is achieved by using both the proposed data transformation function and the multi-layer perceptron neural network, which considerably increased the recognition rates of the three individual classifiers.

54 citations
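The combination scheme described in this entry lends itself to a short illustration. Below is a minimal Python sketch, not the paper's implementation: each base classifier's raw outputs are rescaled into likeness measurements in [0, 1] (min-max scaling stands in for the paper's transformation function, which is not reproduced here), and a multi-layer perceptron aggregates the concatenated measurements into the final decision. All function names and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def to_likeness(scores):
    """Rescale one classifier's raw output vectors to [0, 1];
    larger values mean the input is more likely to belong to that class."""
    scores = np.asarray(scores, dtype=float)
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    return (scores - lo) / np.maximum(hi - lo, 1e-12)

def train_combiner(per_classifier_scores, labels):
    """per_classifier_scores: list of (n_samples, n_classes) arrays,
    one per base classifier; labels: (n_samples,) true classes."""
    features = np.hstack([to_likeness(s) for s in per_classifier_scores])
    combiner = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500)
    combiner.fit(features, labels)   # the MLP aggregates the transformed outputs
    return combiner
```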

Journal ArticleDOI
TL;DR: A fast support vector machine (SVM) training algorithm is proposed under SVM's decomposition framework by effectively integrating kernel caching, digest and shrinking policies, and stopping conditions; its promising scalability paves a new way to solve larger-scale learning problems in other domains such as data mining.
Abstract: A fast support vector machine (SVM) training algorithm is proposed under the SVM decomposition framework by effectively integrating kernel caching, digest and shrinking policies, and stopping conditions. Kernel caching plays a key role in reducing the number of kernel evaluations through maximal reuse of cached kernel elements. Extensive experiments conducted on the large handwritten digit database MNIST show that the proposed algorithm is about nine times faster than Keerthi et al.'s improved SMO. Combined with principal component analysis, the total training of ten one-against-the-rest classifiers on MNIST took less than an hour. Moreover, the proposed fast algorithm speeds up SVM training without sacrificing generalization performance: a 0.6% error rate was achieved on the MNIST test set. The promising scalability of the proposed scheme paves a new way to solve larger-scale learning problems in other domains such as data mining.

54 citations
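As a rough illustration of the MNIST experiment described in this entry (PCA followed by ten one-against-the-rest SVM classifiers), here is a hedged scikit-learn sketch. It uses libsvm's SMO-based solver rather than the paper's own decomposition algorithm, so the kernel caching, digest/shrinking policies, and stopping conditions discussed above are not reproduced; the component count and training subset size are arbitrary assumptions.

```python
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# MNIST: 70,000 28x28 digit images, flattened to 784-dimensional vectors.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0

model = make_pipeline(
    PCA(n_components=50),                                     # assumed component count
    OneVsRestClassifier(SVC(kernel="rbf", cache_size=500)),   # ten one-vs-rest SVMs
)
model.fit(X[:10000], y[:10000])        # small subset so the sketch runs quickly
print("test error:", 1.0 - model.score(X[60000:], y[60000:]))
```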

Proceedings ArticleDOI
31 Aug 2005
TL;DR: The steerability property of Gabor filters is exploited to reduce the high computation cost resulting from frequent image filtering, a common problem encountered in Gabor-filter-based applications.
Abstract: Multi-channel Gabor filtering has been widely used in texture classification. In this paper, Gabor filters are applied to the problem of script identification in printed documents. Our work is divided into two stages. First, a Gabor filter bank is designed so that the extracted rotation-invariant features can handle scripts that are similar in shape and even share many characters. Second, the steerability property of Gabor filters is exploited to reduce the high computation cost resulting from frequent image filtering, a common problem encountered in Gabor filter applications. Results from preliminary experiments, in which Chinese, Japanese, Korean and English are considered, are quite promising: a language identification rate of over 98.5% is achieved while image filtering operations are reduced by 40%.

54 citations
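To make the feature-extraction stage of this entry concrete, below is a minimal sketch of multi-channel Gabor filtering with scikit-image. The filter-bank parameters are assumptions, and rotation invariance is approximated by pooling responses over orientations; the paper's specific bank design and steerable-filter speed-up are not reproduced.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
    """Pool Gabor response energies over orientations at each frequency,
    giving a small feature vector that is insensitive to page rotation."""
    feats = []
    for f in frequencies:
        energies = []
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            energies.append(np.sqrt(real ** 2 + imag ** 2).mean())
        energies = np.asarray(energies)
        feats.extend([energies.mean(), energies.std()])
    return np.asarray(feats)

# Usage: compute gabor_features(gray_text_block) for labelled Chinese, Japanese,
# Korean, and English samples, then feed the vectors to any classifier.
```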

Proceedings ArticleDOI
11 Sep 2006
TL;DR: The experimental results show that, compared to the original PSO, the GBPSO model can reach broader domains in the search space and converge faster in very high-dimensional and complex environments.
Abstract: In this paper, a genetic binary particle swarm optimization (GBPSO) model is proposed, and its performance is compared with the regular binary particle swarm optimizer (PSO) introduced by Kennedy and Eberhart. In the original model, the size of the swarm was fixed. In our model, we introduce birth and death operations in order to make the population highly dynamic. Since birth and mortality rates change naturally with time, our model allows oscillations in the size of the population. Compared to the original PSO model and to genetic algorithms, our strategy offers a more natural simulation of the social behavior of intelligent animals. The experimental results show that, compared to the original PSO, our GBPSO model can reach broader domains in the search space and converge faster in very high-dimensional and complex environments.

54 citations
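A rough sketch of the idea in this entry, under stated assumptions rather than the paper's actual algorithm: a standard Kennedy-Eberhart binary PSO update combined with a birth/death step that discards the weakest particles and spawns mutated copies of the global best. In the paper the birth and mortality rates vary over time so the swarm size oscillates; here the turnover is fixed for brevity, and all parameter values and the toy objective are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Toy objective: maximise the number of ones in the bit string."""
    return int(x.sum())

def gbpso(dim=64, swarm=30, iters=100, w=0.7, c1=1.5, c2=1.5, turnover=0.1):
    X = rng.integers(0, 2, size=(swarm, dim))              # particle positions (bits)
    V = rng.normal(size=(swarm, dim))                      # velocities
    P, pfit = X.copy(), np.array([fitness(x) for x in X])  # personal bests
    g = P[pfit.argmax()].copy()                            # global best
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)  # sigmoid sampling
        f = np.array([fitness(x) for x in X])
        better = f > pfit
        P[better], pfit[better] = X[better], f[better]
        g = P[pfit.argmax()].copy()
        # Death: drop the weakest particles. Birth: spawn mutated copies of g.
        n = max(1, int(turnover * len(X)))
        keep = np.argsort(pfit)[n:]
        X, V, P, pfit = X[keep], V[keep], P[keep], pfit[keep]
        babies = np.tile(g, (n, 1)) ^ (rng.random((n, dim)) < 0.05)
        X = np.vstack([X, babies])
        V = np.vstack([V, rng.normal(size=(n, dim))])
        P = np.vstack([P, babies])
        pfit = np.concatenate([pfit, [fitness(b) for b in babies]])
    return g, fitness(g)

best, score = gbpso()
print(score)   # close to 64 on the toy objective
```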

Journal ArticleDOI
TL;DR: This paper discusses evaluation strategies from several points of view: classification, validation, verification, and performance analysis, noting that formal analysis is replacing (or enhancing) traditional testing of conventional software.
Abstract: The use of expert systems has increased rapidly during the last few years. There is a growing need for systematic and reliable techniques for evaluating both expert system shells and complete expert systems. In this paper, we discuss evaluation strategies from several points of view: classification, validation, verification, and performance analysis. We note that there are several respects in which expert system evaluation is similar to software evaluation in general and, consequently, that it may be possible to apply established software engineering techniques to expert system evaluation. In particular, formal analysis is replacing (or enhancing) traditional testing of conventional software. We believe that increasing formalization is an important trend, and we indicate ways in which it could be carried further.

51 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and graph transformer networks (GTNs) are proposed to train multi-module document recognition systems globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multi-module systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
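For readers who want a concrete starting point, here is a hedged PyTorch sketch of a small LeNet-style convolutional network of the kind reviewed in this entry; it follows the common LeNet-5 layout for 28x28 digit images and is not the authors' original implementation.

```python
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    """A small LeNet-5-style network for 28x28 grayscale digit images."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):                 # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x))

logits = LeNetLike()(torch.randn(4, 1, 28, 28))   # -> shape (4, 10)
```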

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up to date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
Abstract: The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.

6,527 citations

Journal ArticleDOI
TL;DR: A common theoretical framework for combining classifiers which use distinct pattern representations is developed and it is shown that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision.
Abstract: We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions, the sum rule, outperforms other classifier combination schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically.

5,670 citations
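The fixed combination rules compared in this framework are simple to state in code. Below is a minimal sketch (assuming each classifier supplies class-posterior estimates for the same samples): the sum rule, which the paper finds most resilient to estimation errors, averages the posteriors, while the product rule is included for contrast. The example posteriors are made up for illustration.

```python
import numpy as np

def sum_rule(posteriors):
    """posteriors: (n_classifiers, n_samples, n_classes) -> predicted class ids."""
    return np.asarray(posteriors).sum(axis=0).argmax(axis=1)

def product_rule(posteriors):
    return np.asarray(posteriors).prod(axis=0).argmax(axis=1)

# Two classifiers' posterior estimates for two samples over three classes.
p1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
p2 = np.array([[0.5, 0.3, 0.2], [0.1, 0.6, 0.3]])
print(sum_rule([p1, p2]), product_rule([p1, p2]))   # [0 1] [0 1]
```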

Book
01 Jan 1996
TL;DR: In this self-contained account, Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks.
Abstract: From the Publisher: Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as remote sensing, computer vision, space research, and medical imaging. In this book Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. Unifying principles are brought to the fore, and the author gives an overview of the state of the subject. Many examples are included to illustrate real problems in pattern recognition and how to overcome them. This is a self-contained account, ideal both as an introduction for non-specialist readers and as a handbook for the more expert reader.

5,632 citations