Author

Seng Huat Ong

Bio: Seng Huat Ong is an academic researcher from UCSI University. The author has contributed to research in topics: Negative binomial distribution & Beta-binomial distribution. The author has an h-index of 17 and has co-authored 99 publications receiving 2575 citations. Previous affiliations of Seng Huat Ong include University of Kuala Lumpur & University of Malaya.


Papers
Journal ArticleDOI
TL;DR: A new set of orthogonal moment functions based on the discrete Tchebichef polynomials is introduced; these moments are superior to conventional orthogonal moments such as Legendre and Zernike moments in terms of preserving the analytical properties needed to ensure information redundancy in a moment set.
Abstract: This paper introduces a new set of orthogonal moment functions based on the discrete Tchebichef polynomials. The Tchebichef moments can be effectively used as pattern features in the analysis of two-dimensional images. The implementation of the moments proposed in this paper does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. This property makes Tchebichef moments superior to the conventional orthogonal moments such as Legendre moments and Zernike moments, in terms of preserving the analytical properties needed to ensure information redundancy in a moment set. The paper also details the various computational aspects of Tchebichef moments and demonstrates their feature representation capability using the method of image reconstruction.

865 citations
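
The moment computation the abstract describes is easy to prototype. Below is a minimal Python sketch, not the authors' implementation: instead of the recurrence relations the paper derives for the Tchebichef polynomials, it builds an orthonormal polynomial basis on the pixel grid by QR factorization (which spans the same space), then forms the moment matrix and reconstructs the image. The grid size, `max_order`, and the test image are illustrative.

```python
import numpy as np

def discrete_orthonormal_basis(N, max_order):
    """Orthonormal polynomial basis on the grid {0, ..., N-1} with unit weight.

    QR on the Vandermonde matrix is a numerically convenient stand-in for the
    Tchebichef recurrence: the column spans agree, so the moments carry the
    same information.
    """
    x = np.arange(N, dtype=float)
    V = np.vander(x, max_order + 1, increasing=True)  # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                            # columns orthonormal over the grid
    return Q.T                                        # row n = degree-n polynomial values

def tchebichef_like_moments(img, max_order):
    """Moment matrix T[m, n] = sum_y sum_x P_m(y) P_n(x) f(y, x)."""
    Py = discrete_orthonormal_basis(img.shape[0], max_order)
    Px = discrete_orthonormal_basis(img.shape[1], max_order)
    return Py @ img @ Px.T, Py, Px

def reconstruct(T, Py, Px):
    """Invert the moment transform; exact once max_order reaches N - 1."""
    return Py.T @ T @ Px

# Tiny demo on a synthetic 32x32 gradient image.
img = np.outer(np.linspace(0, 1, 32), np.linspace(1, 0, 32))
T, Py, Px = tchebichef_like_moments(img, max_order=10)
approx = reconstruct(T, Py, Px)
print("reconstruction error:", np.abs(img - approx).max())
```

Because the basis is orthogonal in the discrete image domain, no quadrature is involved and reconstruction becomes exact (up to rounding) once `max_order` reaches the image dimension minus one, which is exactly the "no numerical approximation" property the abstract emphasizes.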

Journal ArticleDOI
TL;DR: It is shown that the Krawtchouk moments can be employed to extract local features of an image, unlike other orthogonal moments, which generally capture the global features.
Abstract: A new set of orthogonal moments based on the discrete classical Krawtchouk polynomials is introduced. The Krawtchouk polynomials are scaled to ensure numerical stability, thus creating a set of weighted Krawtchouk polynomials. The set of proposed Krawtchouk moments is then derived from the weighted Krawtchouk polynomials. The orthogonality of the proposed moments ensures minimal information redundancy. No numerical approximation is involved in deriving the moments, since the weighted Krawtchouk polynomials are discrete. These properties make the Krawtchouk moments well suited as pattern features in the analysis of two-dimensional images. It is shown that the Krawtchouk moments can be employed to extract local features of an image, unlike other orthogonal moments, which generally capture the global features. The computational aspects of the moments using the recursive and symmetry properties are discussed. The theoretical framework is validated by an experiment on image reconstruction using Krawtchouk moments, and the results are compared to those of Zernike, pseudo-Zernike, Legendre, and Tchebyscheff moments. Krawtchouk moment invariants are constructed using a linear combination of geometric moment invariants; an object recognition experiment shows that Krawtchouk moment invariants perform significantly better than Hu's moment invariants in both noise-free and noisy conditions.

610 citations
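
The weighting step the abstract describes can be sketched from the classical definition K_n(x; p, N) = 2F1(-n, -x; -N; 1/p). The snippet below is an illustrative sketch, not the paper's recursive implementation: it evaluates the terminating hypergeometric series, applies the square-root binomial weight for numerical stability, and normalizes each order numerically rather than from the closed-form norm. Moving p away from 0.5 shifts the weight's mass along the axis, which is the mechanism behind the local-feature behaviour mentioned above.

```python
import numpy as np
from math import comb

def weighted_krawtchouk(N, max_order, p=0.5):
    """Rows = weighted Krawtchouk polynomials of order 0..max_order on x = 0..N.

    K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), evaluated as a terminating series,
    scaled by sqrt of the binomial weight w(x) = C(N, x) p^x (1-p)^(N-x),
    and normalized numerically.
    """
    x = np.arange(N + 1)
    w = np.array([comb(N, xi) for xi in x], dtype=float) * p**x * (1 - p)**(N - x)
    basis = np.empty((max_order + 1, N + 1))
    for n in range(max_order + 1):
        # terminating 2F1: sum_k (-n)_k (-x)_k / ((-N)_k k!) * (1/p)^k
        term = np.ones(N + 1)              # k = 0 term
        acc = np.ones(N + 1)
        for k in range(n):                 # ratio of consecutive series terms
            term = term * (-(n - k)) * (-(x - k)) / ((-(N - k)) * (k + 1)) / p
            acc = acc + term
        row = acc * np.sqrt(w)
        basis[n] = row / np.linalg.norm(row)
    return basis

B = weighted_krawtchouk(N=63, max_order=8, p=0.3)   # p < 0.5 shifts energy left
print("orthonormality error:", np.abs(B @ B.T - np.eye(9)).max())
```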

Journal ArticleDOI
TL;DR: It is shown how Hahn moments, as a generalization of Chebyshev and Krawtchouk moments, can be used for global and local feature extraction and incorporated into the framework of normalized convolution to analyze local structures of irregularly sampled signals.
Abstract: This paper shows how Hahn moments provide a unified understanding of the recently introduced Chebyshev and Krawtchouk moments. The latter two moments can be obtained as particular cases of Hahn moments with the appropriate parameter settings, which implies that Hahn moments encompass all their properties. The aim of this paper is twofold: (1) to show how Hahn moments, as a generalization of Chebyshev and Krawtchouk moments, can be used for global and local feature extraction, and (2) to show how Hahn moments can be incorporated into the framework of normalized convolution to analyze local structures of irregularly sampled signals.

192 citations
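
The unification is visible at the level of the discrete weight function: Hahn moments are orthogonal with respect to w(x) = C(α+x, x)·C(β+N−x, N−x) on x = 0, ..., N, and α = β = 0 makes the weight uniform, recovering the Tchebichef (discrete Chebyshev) case, while Krawtchouk arises as a limiting case of the parameters. A hedged sketch, reusing the QR construction from the Tchebichef example above with the Hahn weight folded in; the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def hahn_basis(N, max_order, alpha=0.0, beta=0.0):
    """Orthonormal weighted Hahn-type basis on x = 0..N via weighted QR.

    Hahn weight: w(x) = C(alpha+x, x) * C(beta+N-x, N-x), written with
    gammaln so non-integer alpha, beta are allowed.  alpha = beta = 0
    gives a uniform weight, i.e. the Tchebichef special case.
    """
    x = np.arange(N + 1, dtype=float)
    logw = (gammaln(alpha + x + 1) - gammaln(x + 1) - gammaln(alpha + 1)
            + gammaln(beta + N - x + 1) - gammaln(N - x + 1) - gammaln(beta + 1))
    sw = np.exp(0.5 * logw)                       # sqrt of the Hahn weight
    V = np.vander(x, max_order + 1, increasing=True) * sw[:, None]
    Q, _ = np.linalg.qr(V)                        # orthonormal w.r.t. w on the grid
    return Q.T

B0 = hahn_basis(64, 6)                     # alpha = beta = 0: Tchebichef-like
B1 = hahn_basis(64, 6, alpha=10, beta=2)   # skewed weight emphasizes one side
```

Tuning α and β slides the weight's mass along the axis, which is how a single Hahn family covers both the global (Chebyshev-like) and local (Krawtchouk-like) feature extraction regimes the abstract mentions.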

Journal ArticleDOI
TL;DR: This paper applies credit scoring techniques to the payment histories of members of a recreational club (real bank data being unavailable) to improve the assessment of creditworthiness using credit scoring models.
Abstract: Credit scoring models have been developed by banks and researchers to improve the assessment of creditworthiness during the credit evaluation process. The objective of credit scoring models is to assign applicants to either a "good risk" group that is likely to repay its financial obligation or a "bad risk" group that has a high possibility of defaulting on the financial obligation. Construction of credit scoring models requires data mining techniques. Using historical data on payments, demographic characteristics and statistical techniques, credit scoring models can help identify the demographic characteristics related to credit risk and provide a score for each customer. This paper illustrates the use of data mining to improve the assessment of creditworthiness with credit scoring models. Due to privacy concerns and the unavailability of real financial data from banks, this study applies the credit scoring techniques to the payment history of members of a recreational club. The club has been facing a rising number of defaulters in their monthly club subscription payments, and the management would like a model they can deploy to identify potential defaulters. The classification performance of a credit scorecard model, a logistic regression model and a decision tree model was compared. The classification error rates for the credit scorecard model, logistic regression and decision tree were 27.9%, 28.8% and 28.1%, respectively. Although no model clearly outperforms the others, scorecards are much easier to deploy in practical applications.

162 citations
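
A minimal version of the model comparison can be assembled with scikit-learn. Everything below is illustrative: the feature names and synthetic labels stand in for the club's private payment-history records, and no attempt is made to reproduce the paper's error rates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000

# Hypothetical stand-ins for the demographic / payment-history features
# used in the paper (the real club data are not public).
X = np.column_stack([
    rng.integers(18, 70, n),     # age
    rng.integers(0, 12, n),      # months in arrears over the past year
    rng.integers(1, 30, n),      # membership tenure, years
])
# Synthetic label: arrears drive the default probability.
p_default = 1.0 / (1.0 + np.exp(-(0.6 * X[:, 1] - 3.0)))
y = rng.random(n) < p_default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
]:
    err = 1.0 - model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: classification error {err:.3f}")
```

A scorecard is, in essence, a logistic regression on binned features with coefficients rescaled to points, which is consistent with the paper's finding that its accuracy is comparable while deployment is simpler.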

01 Jun 2001
TL;DR: A comparative analysis between Legendre and Zernike moments and a new set of discrete orthogonal moments based on Tchebichef polynomials is presented, with experimental results on both binary and gray-level images included to show the advantages of discrete orthogonal moments over continuous moments.
Abstract: Image feature representation techniques using orthogonal moment functions have been used in many applications such as invariant pattern recognition, object identification and image reconstruction. Legendre and Zernike moments are very popular in this class, owing to their feature representation capability with a minimal information redundancy measure. This paper presents a comparative analysis between these moments and a new set of discrete orthogonal moments based on Tchebichef polynomials. The implementation aspects of orthogonal moments are discussed, and experimental results using both binary and gray-level images are included to show the advantages of discrete orthogonal moments over continuous moments.

63 citations
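
The claimed advantage can be checked numerically: sampling continuous Legendre polynomials on a pixel grid breaks their orthogonality (the source of the approximation error in Legendre moments), whereas a basis constructed to be orthogonal in the discrete domain has none. A small sketch; the grid size and maximum order are arbitrary.

```python
import numpy as np
from numpy.polynomial import legendre

N, max_order = 64, 8
x = -1 + (2 * np.arange(N) + 1) / N           # pixel centers mapped into [-1, 1]

# Continuous Legendre polynomials, merely *sampled* on the grid,
# normalized so that the integral of each square over [-1, 1] is 1.
L = np.stack([legendre.legval(x, [0] * n + [np.sqrt((2 * n + 1) / 2)])
              for n in range(max_order + 1)])
G_cont = (L @ L.T) * (2 / N)                  # Riemann-sum Gram matrix

# Basis orthonormal *in the discrete domain* (QR on the Vandermonde matrix).
Q, _ = np.linalg.qr(np.vander(x, max_order + 1, increasing=True))
G_disc = Q.T @ Q

print("sampled Legendre orthogonality error:", np.abs(G_cont - np.eye(max_order + 1)).max())
print("discrete basis orthogonality error:  ", np.abs(G_disc - np.eye(max_order + 1)).max())
```

The first error shrinks only as the grid is refined; the second is at machine precision for any grid, which is the point of the comparison in the paper.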


Cited by
Journal ArticleDOI
TL;DR: An approximate conditional and joint association analysis that can use summary-level statistics from a meta-analysis of genome-wide association studies (GWAS) and estimated linkage disequilibrium (LD) from a reference sample with individual-level genotype data is presented.
Abstract: We present an approximate conditional and joint association analysis that can use summary-level statistics from a meta-analysis of genome-wide association studies (GWAS) and estimated linkage disequilibrium (LD) from a reference sample with individual-level genotype data. Using this method, we analyzed meta-analysis summary data from the GIANT Consortium for height and body mass index (BMI), with the LD structure estimated from genotype data in two independent cohorts. We identified 36 loci with multiple associated variants for height (38 leading and 49 additional SNPs, 87 in total) via a genome-wide SNP selection procedure. The 49 new SNPs explain approximately 1.3% of variance, nearly doubling the heritability explained at the 36 loci. We did not find any locus showing multiple associated SNPs for BMI. The method we present is computationally fast and is also applicable to case-control data, which we demonstrate in an example from meta-analysis of type 2 diabetes by the DIAGRAM Consortium.

1,352 citations
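
The core approximation behind the method fits in a few lines: for standardized genotypes and a common sample size, the joint effect estimates at a set of SNPs are approximately the inverse LD correlation matrix applied to the marginal estimates. The toy sketch below deliberately ignores allele-frequency scaling, per-SNP sample sizes, and the genome-wide stepwise selection the paper performs.

```python
import numpy as np

def approximate_joint_effects(beta_marginal, R, ridge=1e-6):
    """Toy conditional/joint estimate from summary statistics.

    beta_marginal : marginal per-SNP effects (standardized units)
    R             : SNP-SNP LD correlation matrix from a reference panel
    A small ridge term guards against an ill-conditioned reference LD matrix.
    """
    R_reg = R + ridge * np.eye(len(R))
    return np.linalg.solve(R_reg, beta_marginal)

# Two SNPs in LD (r = 0.8): only the first is truly causal, but both show
# marginal signal; the joint estimate concentrates the effect on SNP 1.
R = np.array([[1.0, 0.8],
              [0.8, 1.0]])
beta_true = np.array([0.10, 0.0])
beta_marginal = R @ beta_true                        # expected marginal effects
print(approximate_joint_effects(beta_marginal, R))   # ~ [0.10, 0.0]
```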

Book
18 Feb 2002
TL;DR: The new edition of Feature Extraction and Image Processing provides an essential guide to the implementation of image processing and computer vision techniques, explaining techniques and fundamentals in a clear and concise manner, and features a companion website that includes worksheets, links to free software, Matlab files, solutions and new demonstrations.
Abstract: Image processing and computer vision are currently hot topics with undergraduates and professionals alike. "Feature Extraction and Image Processing" provides an essential guide to the implementation of image processing and computer vision techniques, explaining techniques and fundamentals in a clear and concise manner. Readers can develop working techniques, with usable code provided throughout and working Matlab and Mathcad files on the web. Focusing on feature extraction while also covering issues and techniques such as image acquisition, sampling theory, point operations and low-level feature extraction, the authors have a clear and coherent approach that will appeal to a wide range of students and professionals. The new edition includes new coverage of curvature in low-level feature extraction (SIFT and saliency) and features (phase congruency); geometric active contours; morphology; and camera models, along with updated coverage of image smoothing (anisotropic diffusion), skeletonization, edge detection, curvature, and shape descriptions (moments). It is essential reading for engineers and students working in this cutting-edge field, and an ideal module text and background reference for courses in image processing and computer vision. It features a companion website that includes worksheets, links to free software, Matlab files, solutions and new demonstrations.

929 citations

Book ChapterDOI
01 Nov 2008
TL;DR: Content-based image retrieval (CBIR) has emerged as a promising means of retrieving images and browsing large image databases; it is the process of retrieving images from a collection based on automatically extracted features.
Abstract: "A picture is worth one thousand words." This proverb comes from Confucius, a Chinese philosopher who lived about 2500 years ago, and its essence is now universally understood. A picture can be magical in its ability to quickly communicate a complex story or a set of ideas that can be recalled by the viewer later in time. Visual information plays an important role in our society, it will play an increasingly pervasive role in our lives, and there will be a growing need to have these sources processed further. Pictures and images are used in many application areas such as architectural and engineering design, fashion, journalism, advertising, and entertainment, which provides ample opportunity to use the abundance of images. However, this knowledge is useless if one cannot find it. In the face of a large and rapidly increasing number of images, how to search for and retrieve the images of interest with facility is a critical problem, and it creates the need for image retrieval systems. As we know, the visual features of images provide a description of their content. Content-based image retrieval (CBIR) has emerged as a promising means of retrieving images and browsing large image databases, and it has been a topic of intensive research in recent years. It is the process of retrieving images from a collection based on automatically extracted features.

727 citations
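
The process described in the last sentence, extracting features and ranking a collection by distance to the query's features, fits in a dozen lines. The sketch below uses a plain intensity histogram as the automatically extracted feature; it is illustrative only, and any descriptor (including the orthogonal moments discussed in the papers above) could be substituted in `extract_features`.

```python
import numpy as np

def extract_features(img, bins=32):
    """Automatically extracted feature: a normalized intensity histogram.
    (Any descriptor, e.g. Tchebichef or Krawtchouk moments, works here.)"""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def retrieve(query_img, database, k=3):
    """Rank database images by Euclidean distance in feature space."""
    q = extract_features(query_img)
    feats = np.stack([extract_features(im) for im in database])
    dists = np.linalg.norm(feats - q, axis=1)
    return np.argsort(dists)[:k]          # indices of the k best matches

# Toy collection of random "images" in [0, 1]; a real system would index
# the features once, offline, rather than recomputing them per query.
rng = np.random.default_rng(1)
db = [rng.random((64, 64)) ** (i % 4 + 1) for i in range(20)]
print(retrieve(db[5], db))                # db[5] itself should rank first
```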

Journal Article
TL;DR: A review of two books on statistical and mathematical demography: the third edition of Keyfitz's Applied Mathematical Demography, whose new material reflects the new coauthor's particular focus on matrix population models (see, e.g., Caswell 2000), and Alho and Spencer's statistically oriented treatment of the discipline.
Abstract: Here are two books on a topic new to Technometrics: statistical and mathematical demography. The first author of Applied Mathematical Demography wrote the first two editions of this book alone. The second edition was published in 1985. Professor Keyfitz noted in the Preface (p. vii) that at age 90 he had no interest in doing another edition; however, the publisher encouraged him to find a coauthor. The result is an additional focus for the book in the world of biology that makes it much more relevant for the sciences. The book is now part of the publisher's series on Statistics for Biology and Health. Much of it, of course, focuses on the many aspects of human populations. The new material focuses on matrix population models, the particular focus of the new author (see, e.g., Caswell 2000). As one might expect from a book that was originally written in the 1970s, it does not include a lot of information on statistical computing. The new book by Alho and Spencer is focused on putting a better emphasis on statistics in the discipline of demography (Preface, p. vii). It is part of the publisher's Series in Statistics. The authors are both statisticians, so the focus is on statistics as used for demographic problems. The authors are targeting human applications, so their perspective on science does not extend any further than epidemiology. The book actually strikes a good balance between statistical tools and demographic applications. The authors use the first two chapters to teach statisticians the concepts of demography. The next four chapters are very similar to the statistics content found in introductory books on survival analysis, such as the recent book by Kleinbaum and Klein (2005), reported by Ziegel (2006). The next three chapters focus on various aspects of forecasting demographic rates. The book concludes with chapters on three areas of application: errors in census numbers, financial applications, and small-area estimates.

710 citations