Author

Q. M. Jonathan Wu

Bio: Q. M. Jonathan Wu is an academic researcher from the University of Windsor. The author has contributed to research in topics: Feature extraction & Image segmentation. The author has an h-index of 43 and has co-authored 323 publications receiving 7,298 citations. Previous affiliations of Q. M. Jonathan Wu include Hangzhou Dianzi University & Indian Institute of Technology Roorkee.


Papers
Journal Article
TL;DR: This paper introduces a new generalized hierarchical FCM (GHFCM) that is more robust to image noise thanks to a generalized-mean spatial constraint, and introduces a more flexible distance function that treats the distance computation itself as a sub-FCM.
Abstract: Fuzzy c-means (FCM) has been considered an effective algorithm for image segmentation. However, it still suffers from two problems: one is insufficient robustness to image noise, and the other is the Euclidean distance in FCM, which is sensitive to outliers. In this paper, we propose two new algorithms, generalized FCM (GFCM) and hierarchical FCM (HFCM), to solve these two problems. Traditional FCM can be considered, from the form of its mathematical formula, a linear combination of membership and distance. GFCM is generated by applying the generalized mean to these two terms. We impose the generalized mean on membership to incorporate local spatial information and cluster information, and on the distance function to incorporate local spatial information and image intensity values. Thus, GFCM is more robust to image noise owing to its spatial constraint, the generalized mean. To solve the second problem, caused by the Euclidean distance (l2 norm), we introduce a more flexible function that treats the distance function itself as a sub-FCM. Furthermore, the sub-FCM distance function in HFCM is general and flexible enough to deal with non-Euclidean data. Finally, we combine these two algorithms to obtain a new generalized hierarchical FCM (GHFCM). Experimental results demonstrate the improved robustness and effectiveness of the proposed algorithm.

434 citations
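
A minimal sketch of the generalized-mean idea, assuming intensity-only features, a uniform local window, and a power mean as the generalized mean; this illustrates the spatial constraint only, not the paper's full GFCM/HFCM/GHFCM:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_generalized_mean(img, c=3, m=2.0, p=2.0, win=3, n_iter=50, seed=0):
    """Plain FCM on pixel intensities, except that each cluster's squared-
    distance map is replaced by its power (generalized) mean over a local
    window, so noisy pixels are judged by their neighbourhood too."""
    rng = np.random.default_rng(seed)
    x = img.astype(float).ravel()
    v = rng.choice(x, size=c, replace=False)          # initial cluster centres
    for _ in range(n_iter):
        d = (x[None, :] - v[:, None]) ** 2 + 1e-9     # c x N squared distances
        d_maps = d.reshape(c, *img.shape)
        # power mean of the distance over a win x win window, per cluster
        d_maps = uniform_filter(d_maps ** p, size=(1, win, win)) ** (1.0 / p)
        d = d_maps.reshape(c, -1)
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (1.0 / (m - 1)), axis=1)
        v = (u ** m @ x) / np.sum(u ** m, axis=1)     # update centres
    return u.argmax(axis=0).reshape(img.shape), v
```

With p = 1 the window reduces to a plain local average; larger p weights the window toward its largest distances, which is how the power mean trades off between mean-like and max-like spatial pooling.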

Journal Article
TL;DR: A novel fusion framework is proposed for multimodal medical images based on non-subsampled contourlet transform (NSCT) to enable more accurate analysis of multimodality images.
Abstract: Multimodal medical image fusion, a powerful tool for clinical applications, has developed with the advent of various imaging modalities in medical imaging. The main motivation is to capture the most relevant information from the sources in a single output, which plays an important role in medical diagnosis. In this paper, a novel fusion framework is proposed for multimodal medical images based on the non-subsampled contourlet transform (NSCT). The source medical images are first transformed by NSCT, followed by combining their low- and high-frequency components. Two different fusion rules based on phase congruency and directive contrast are proposed and used to fuse the low- and high-frequency coefficients. Finally, the fused image is constructed by the inverse NSCT with all composite coefficients. Experimental results and a comparative study show that the proposed fusion framework provides an effective way to enable more accurate analysis of multimodality images. Further, the applicability of the proposed framework is demonstrated on three clinical examples of patients affected by Alzheimer's disease, subacute stroke, and recurrent tumor.

381 citations
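
A hedged skeleton of the decompose-merge-invert pipeline: NSCT has no standard library implementation, so pywt's ordinary wavelet transform stands in for it, and simple average / max-absolute-value rules stand in for the paper's phase-congruency and directive-contrast rules. Only the structure of the framework matches.

```python
import numpy as np
import pywt  # ordinary wavelets as a stand-in for NSCT

def fuse_multiscale(img_a, img_b, wavelet="db2", levels=3):
    """Decompose both sources, merge low- and high-frequency coefficients
    with separate rules, and invert the transform."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = [(ca[0] + cb[0]) / 2.0]     # low-pass: average (stand-in rule)
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # high-pass: keep the larger-magnitude coefficient (stand-in rule)
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)
```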

Journal Article
TL;DR: A fast image similarity measurement based on random verification is proposed to efficiently implement copy detection; the proposed method achieves higher accuracy than state-of-the-art methods and has efficiency comparable to the baseline method based on BOW quantization.
Abstract: To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.

332 citations
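
For context, the sketch below is the standard pipeline the paper builds on: initial SIFT matches filtered by Lowe's ratio test, followed by a verification pass. RANSAC homography fitting plays the role of the geometric-consistency verification the paper criticizes; the OR-GCD global-context check itself is not reproduced here.

```python
import cv2
import numpy as np

def verified_matches(path_a, path_b, ratio=0.75):
    """SIFT matching with a ratio test, then RANSAC geometric verification."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    if len(good) < 4:                  # a homography needs 4+ correspondences
        return []
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```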

Journal Article
TL;DR: A new human face recognition algorithm based on bidirectional two-dimensional principal component analysis (B2DPCA) and the extreme learning machine (ELM), in which the curvelet subband exhibiting the maximum standard deviation is dimensionally reduced using an improved dimensionality reduction technique.
Abstract: In this work, a new human face recognition algorithm based on bidirectional two-dimensional principal component analysis (B2DPCA) and the extreme learning machine (ELM) is introduced. The proposed method is based on curvelet image decomposition of human faces, and the subband that exhibits the maximum standard deviation is dimensionally reduced using an improved dimensionality reduction technique. Discriminative feature sets are generated using B2DPCA to ascertain classification accuracy. Other notable contributions of the proposed work include significant improvements in classification rate, up to a hundredfold reduction in training time, and minimal dependence on the number of prototypes. Extensive experiments are performed using challenging databases, and results are compared against state-of-the-art techniques.

308 citations
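
The ELM half of the pipeline is small enough to sketch in full, and it shows where the training-time reduction comes from: the hidden layer is random and never trained, and the output weights are solved in one least-squares step. The curvelet/B2DPCA feature extraction is omitted here; any feature matrix X will do.

```python
import numpy as np

class ELM:
    """Extreme learning machine: random hidden layer, closed-form output."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random, untrained features
        T = np.eye(y.max() + 1)[y]            # one-hot class targets
        self.beta = np.linalg.pinv(H) @ T     # single least-squares solve
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```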

Journal Article
TL;DR: A new image indexing and retrieval algorithm using local mesh patterns is proposed for biomedical image retrieval; it shows a significant improvement in evaluation measures compared with LBP, LBP with the Gabor transform, and other spatial- and transform-domain methods.
Abstract: In this paper, a new image indexing and retrieval algorithm using local mesh patterns is proposed for biomedical image retrieval applications. The standard local binary pattern encodes the relationship between a referenced pixel and its surrounding neighbors, whereas the proposed method encodes the relationships among the surrounding neighbors for a given referenced pixel in an image. The possible relationships among the surrounding neighbors depend on the number of neighbors, P. In addition, the effectiveness of the algorithm is confirmed by combining it with the Gabor transform. To prove its effectiveness, three experiments were carried out on three different biomedical image databases: two for computed tomography (CT) and one for magnetic resonance (MR) image retrieval. The databases used are the OASIS-MRI database, the NEMA-CT database, and the VIA/I-ELCAP database, which includes region-of-interest CT images. The results show a significant improvement in evaluation measures compared with LBP, LBP with the Gabor transform, and other spatial- and transform-domain methods.

193 citations
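
The contrast the abstract draws can be sketched directly, under one plausible reading: standard LBP thresholds each ring neighbour against the centre pixel, while a mesh-style code thresholds each neighbour against the neighbour alpha steps further around the ring, ignoring the centre entirely. This illustrates the idea only; it is not the paper's exact LMeP definition.

```python
import numpy as np

def _ring(img):
    """8 neighbours of every interior pixel, clockwise from top-left."""
    h, w = img.shape
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    nb = np.stack([img[1+dy:h-1+dy, 1+dx:w-1+dx] for dy, dx in shifts])
    return nb, img[1:-1, 1:-1]

def lbp(img):
    """Standard LBP: neighbours thresholded against the centre pixel."""
    nb, centre = _ring(img.astype(int))
    bits = (nb >= centre).astype(int)
    return (bits * (1 << np.arange(8))[:, None, None]).sum(axis=0)

def mesh_code(img, alpha=1):
    """Mesh-style code: neighbours thresholded against the neighbour
    alpha steps round the ring; the centre pixel is not used."""
    nb, _ = _ring(img.astype(int))
    bits = (np.roll(nb, -alpha, axis=0) >= nb).astype(int)
    return (bits * (1 << np.arange(8))[:, None, None]).sum(axis=0)
```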


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
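
The mail-filter example in the fourth category is easy to make concrete. The toy below (messages and labels invented, scikit-learn assumed) learns the user's keep/reject rule from labelled examples instead of hand-written rules:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = the user rejected the message.
messages = ["cheap meds online now", "team meeting moved to 3pm",
            "you won a free prize", "draft report attached for review"]
rejected = [1, 0, 1, 0]

mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, rejected)
print(mail_filter.predict(["claim your free prize now"]))  # expect [1]
```

As the abstract notes, the point is that the rules are maintained automatically: refitting on new examples replaces rewriting the filter by hand.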

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations


Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations
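
The perspective model the scheme assumes is the standard pin-hole projection: a point (X, Y, Z) in front of the camera maps to the focal plane at f(X/Z, Y/Z), and differentiating that map ties 3-D motion to the image motion the robot observes. A minimal sketch, with the focal length f an assumed parameter:

```python
import numpy as np

def project(point, velocity, f=1.0):
    """Pin-hole projection of a 3-D point, plus the image velocity induced
    by its 3-D velocity (time derivative of x = f*X/Z, y = f*Y/Z)."""
    X, Y, Z = point
    Vx, Vy, Vz = velocity
    pos = f * np.array([X / Z, Y / Z])
    vel = f * np.array([(Vx * Z - X * Vz) / Z ** 2,
                        (Vy * Z - Y * Vz) / Z ** 2])
    return pos, vel
```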