Yuan Yan Tang
Other affiliations: Hong Kong Community College, Southwest Baptist University, Concordia University
Bio: Yuan Yan Tang is an academic researcher at the University of Macau. He has contributed to research on the topics of wavelets and the wavelet transform, has an h-index of 58, and has co-authored 647 publications receiving 12,835 citations. Previous affiliations include Hong Kong Community College and Southwest Baptist University.
TL;DR: An effective small-target detection algorithm, inspired by the contrast mechanism of the human vision system and the derived-kernel model, is presented; it significantly improves the SNR of the image.
Abstract: Robust detection of small targets at low signal-to-noise ratio (SNR) is very important in infrared search-and-track applications for self-defense or attack. Accordingly, this paper presents an effective small-target detection algorithm inspired by the contrast mechanism of the human vision system and the derived-kernel model. In the first stage, a local contrast map of the input image is obtained using the proposed local contrast measure, which quantifies the dissimilarity between the current location and its neighborhood. In this way, target signal enhancement and background clutter suppression are achieved simultaneously. In the second stage, an adaptive threshold is applied to segment the target. Experiments on two sequences validate the detection capability of the proposed method, and the evaluation results show that it is simple and effective in terms of detection accuracy. In particular, the proposed method significantly improves the SNR of the image.
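The two-stage pipeline in the abstract (local contrast map, then adaptive threshold) can be sketched as follows. This is a minimal illustration, not the authors' exact measure: here the contrast at a pixel is simply its intensity divided by the mean of its 8-neighborhood, and the adaptive threshold is the conventional mean-plus-k-sigma rule over the contrast map.

```python
# Hedged sketch of a local-contrast small-target detector:
# contrast = center / mean(8-neighborhood), threshold T = mu + k*sigma.
from statistics import mean, pstdev

def local_contrast_map(img, eps=1e-6):
    """Return a contrast map; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    cmap = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            cmap[y][x] = img[y][x] / (mean(neigh) + eps)
    return cmap

def detect(img, k=3.0):
    """Adaptive threshold T = mu + k*sigma on the contrast map."""
    cmap = local_contrast_map(img)
    vals = [v for row in cmap for v in row]
    thr = mean(vals) + k * pstdev(vals)
    return [(x, y) for y, row in enumerate(cmap)
            for x, v in enumerate(row) if v > thr]

# A dim 5x5 background with one bright pixel: the target stands out
# in the contrast map even though its absolute intensity is modest.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 60
print(detect(img))   # [(2, 2)]
```

Note how the contrast map does both jobs at once: a bright pixel surrounded by dim neighbors gets a large ratio (target enhancement), while uniform clutter regions stay near 1 (background suppression).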
TL;DR: Theoretical analysis and experimental results validate that gradient faces is an illumination-insensitive measure, robust to different illumination conditions including uncontrolled natural lighting, and also insensitive to image noise and object artifacts.
Abstract: In this correspondence, we propose a novel method, called gradient faces, for extracting illumination-insensitive features for face recognition under varying lighting. Theoretical analysis shows that gradient faces is an illumination-insensitive measure and is robust to different illumination conditions, including uncontrolled natural lighting. In addition, gradient faces is derived from the image gradient domain, so it can discover the underlying inherent structure of face images, since the gradient domain explicitly considers the relationships between neighboring pixels. Therefore, gradient faces has more discriminating power than illumination-insensitive measures extracted from the pixel domain. Recognition rates of 99.83% on the PIE database of 68 subjects, 98.96% on the Yale B database of ten subjects, and 95.61% on an outdoor database of 132 subjects under uncontrolled natural lighting show that gradient faces is an effective method for face recognition under varying illumination. Furthermore, experimental results on the Yale database validate that gradient faces is also insensitive to image noise and object artifacts (such as facial expressions).
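The core idea behind a gradient-domain feature can be shown in a few lines. This is a simplified sketch, not the paper's full method (which also smooths the image before differentiation): the feature at each pixel is the gradient orientation atan2(dI/dy, dI/dx). A smoothly varying, locally multiplicative illumination scales both derivatives by the same factor, so the ratio, and hence the orientation, is essentially unchanged.

```python
# Hedged sketch of the gradient-orientation idea behind gradient faces.
from math import atan2

def gradient_face(img):
    """Gradient-orientation map via central differences (borders skipped)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = atan2(gy, gx)
    return out

# The same "face" under 3x brighter (multiplicative) lighting yields the
# same orientation map: both gradients scale by 3, leaving atan2 unchanged.
face = [[(x + 1) * (y + 2) for x in range(5)] for y in range(5)]
bright = [[3 * v for v in row] for row in face]
same = all(abs(a - b) < 1e-12
           for ra, rb in zip(gradient_face(face), gradient_face(bright))
           for a, b in zip(ra, rb))
print(same)   # True
```

This invariance to a multiplicative lighting factor is exactly why a gradient-domain measure can be more robust than features read straight from the pixel domain.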
TL;DR: The intermittent fault-tolerance scheme is fully taken into account in designing a reliable asynchronous sampled-data controller that ensures the resulting neural networks are asymptotically stable.
TL;DR: The proposed HD-MSL effectively combines varied features into a unified representation and integrates the labeling information within a probabilistic framework; it can automatically learn a combination coefficient for each view, which plays an important role in exploiting the complementary information of multiview data.
Abstract: How do we find all images in a larger set that have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, such as the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) are available, and efficiently utilizing them is a challenge. The traditional concatenation scheme, which links the features of different views into one long vector, is inappropriate because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information within a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph, in place of the pairwise distance, when estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in exploiting the complementary information of multiview data. An alternating optimization is designed to solve the HD-MSL objective functions and to obtain the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.
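The role of per-view combination coefficients can be illustrated without reproducing the HD-MSL optimization itself. The sketch below is hypothetical and much simpler than the paper's method: each sample is described by several feature views, and a weight per view (hand-set here, learned in HD-MSL) blends the per-view distances before a 1-NN classification.

```python
# Hedged sketch: weighted multiview distances, NOT the HD-MSL algorithm.
def view_distance(a, b):
    """Squared Euclidean distance within one view."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def combined_distance(sample_a, sample_b, weights):
    """Weighted sum of per-view distances (one weight per view)."""
    return sum(w * view_distance(va, vb)
               for w, va, vb in zip(weights, sample_a, sample_b))

def predict(query, train, labels, weights):
    """1-NN under the combined multiview distance."""
    d = [combined_distance(query, t, weights) for t in train]
    return labels[d.index(min(d))]

# Two views per sample: view 0 is informative, view 1 is noisy.
train = [([0.0, 0.1], [9.0]), ([0.9, 1.0], [0.5])]
labels = ["cat", "dog"]
query = ([0.1, 0.2], [0.4])   # view 0 says "cat", view 1 misleads
print(predict(query, train, labels, weights=[1.0, 0.0]))  # "cat"
print(predict(query, train, labels, weights=[0.0, 1.0]))  # "dog"
```

The two calls show why a single concatenated vector is a poor substitute: the classes flip depending on which view dominates, so learning the coefficients (as HD-MSL does) is what lets the complementary views help rather than hurt.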
TL;DR: A modified loose-looped fuzzy membership function (FMF)-dependent Lyapunov-Krasovskii functional (LKF) is constructed based on the information of the time derivatives of the FMFs; the model involves not only a signal transmission delay but also switched topologies.
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
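The mail-filter scenario above is a natural fit for a tiny learned classifier. The sketch below is one common way to do it (a naive Bayes text classifier with Laplace smoothing), offered as an illustration of "learning the rules from labeled examples" rather than any specific system: the word lists, labels, and messages are all made up.

```python
# Hedged sketch: a minimal naive Bayes mail filter learned from examples.
from collections import Counter
from math import log

def train(messages):
    """messages: list of (text, label). Returns per-label word counts
    and per-label message totals."""
    counts, totals = {}, Counter()
    for text, label in messages:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-posterior, using Laplace
    (add-one) smoothing so unseen words do not zero out a class."""
    vocab = {w for c in counts.values() for w in c}
    total_msgs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        n = sum(c.values())
        score = log(totals[label] / total_msgs)          # class prior
        for w in text.lower().split():
            score += log((c[w] + 1) / (n + len(vocab)))  # word likelihood
        if score > best_score:
            best, best_score = label, score
    return best

mail = [("win money now", "spam"), ("cheap money offer", "spam"),
        ("meeting agenda attached", "ham"), ("lunch meeting today", "ham")]
counts, totals = train(mail)
print(classify("free money offer", counts, totals))   # prints "spam"
```

When the user rejects or accepts new messages, they simply become new labeled examples, and retraining updates the filter automatically; this is exactly the maintenance burden the passage says hand-written rules cannot carry.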
TL;DR: Probability distributions, linear models for regression and classification, and the combination of models are covered in this article in the context of machine learning.
Abstract: Probability Distributions - Linear Models for Regression - Linear Models for Classification - Neural Networks - Kernel Methods - Sparse Kernel Machines - Graphical Models - Mixture Models and EM - Approximate Inference - Sampling Methods - Continuous Latent Variables - Sequential Data - Combining Models.